Capture video using FFMPEG without frozen image effect - ffmpeg

Could anyone help me? I have been trying to record video from an RTSP server using FFmpeg, but somehow the resulting video has many frozen frames (it can't be used for any people detection).
Here is the code I used:
ffmpeg -i rtsp://10.10.10.10/encoder1 -b:v 1024k -s 640x480 -an -t 60 -r 12.5 output.mp4
What have I done so far?
- Recorded the video at a smaller dimension instead of the original one
- Disabled audio and lowered the FPS
- Even recorded from only two IP sources on one machine
But still no luck. Has anyone experienced this?
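One thing that might be worth trying (an assumption on my part, since packet loss on RTSP-over-UDP is a common cause of smeared or frozen frames) is forcing the RTSP session onto TCP with -rtsp_transport, keeping the rest of the command the same:
ffmpeg -rtsp_transport tcp -i rtsp://10.10.10.10/encoder1 -b:v 1024k -s 640x480 -an -t 60 -r 12.5 output.mp4
If the freezes disappear over TCP, the original problem was most likely dropped UDP packets rather than the encoding settings.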

Related

How to remove a frame with ffmpeg without re-encoding?

I am making a datamoshing program in C++, and I need to find a way to remove one frame from a video (specifically, the p-frame right after a sequence jump) without re-encoding the video. I am currently using h.264 but would like to be able to do this with VP9 and AV1 as well.
I have one way of going about it, but it doesn't work for one frustrating reason (mentioned later). I can turn the original video into two intermediate videos - one with just the i-frame before the sequence jump, and one with the p-frame that was two frames later. I then create a concat.txt file with the following contents:
file video.mkv
file video1.mkv
And run ffmpeg -y -f concat -i concat.txt -c copy output.mp4. This produces the expected output, although it is of course not as efficient as I would like, since it requires creating intermediate files and reading the .txt file from disk (performance is very important in this project).
But worse yet, I couldn't generate the intermediate videos with ffmpeg; I had to use Avidemux. I tried all sorts of variations on ffmpeg -y -ss 00:00:00 -i video.mp4 -t 0.04 -codec copy video.mkv, but that command really bugs out with videos 1-2 frames long, while it works for longer videos without a problem. My best guess is that there is some internal check to ensure the output video is not corrupt (which, unfortunately, is exactly what I want it to be!).
Maybe there's a way to do it this way that gets around that problem, or better yet, a more elegant solution to the problem in the first place.
Thanks!
If you know the PTS or data offset or packet index of the target frame, then you can use the noise bitstream filter. This is codec-agnostic.
ffmpeg -copyts -i input -c copy -enc_time_base -1 -bsf:v:0 noise=drop=eq(pos\,11291) out
This will drop the packet from the first video stream stored at offset 11291 in the input file. See other available variables at http://www.ffmpeg.org/ffmpeg-bitstream-filters.html#noise
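If you still need to find that byte offset, ffprobe can print the position of every packet in the stream; a minimal sketch, assuming the target frame is in the first video stream and the file name is a placeholder:
ffprobe -select_streams v:0 -show_entries packet=pts,pos,flags -of csv input.mp4
Each output row gives the packet's PTS, its byte offset (the value to use for pos in the noise filter expression), and a flag indicating whether it is a keyframe.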

Reduce video processing time using FFMpeg?

Users of my app upload videos to my server, and I process them to create different qualities, thumbnails, GIFs, etc., which are then used by the mobile and web apps. It takes almost 15-20 minutes for each video to be processed. I am using ffmpeg. How can I reduce my processing time?
I can't comment, so I'll ask here: is the 15-20 minutes just to make a thumbnail/GIF from a video? If so, that's an awful lot.
If you want HQ lossless output, consider using the x264 encoder with the lossless_ultrafast preset to make the videos:
ffmpeg -f x11grab -r 25 -s 1080x720 -i :0.0 -vcodec libx264 -vpre ultrafast yourfile.mkv
If possible, use a GPU to convert.
I might be wrong, but FFmpeg may only use one thread by default; you could run multiple instances in parallel to work around that.
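Another angle, if most of the time goes into decoding the upload once per rendition: a single ffmpeg invocation can decode the source once and write all the qualities plus a thumbnail in one pass. A rough sketch, with file names, sizes, and CRF values as placeholder assumptions:
ffmpeg -i input.mp4 \
-map 0:v -map 0:a? -vf scale=-2:720 -c:v libx264 -preset veryfast -crf 23 -c:a aac 720p.mp4 \
-map 0:v -map 0:a? -vf scale=-2:480 -c:v libx264 -preset veryfast -crf 23 -c:a aac 480p.mp4 \
-map 0:v -vf "thumbnail,scale=320:-2" -frames:v 1 thumb.png
A faster -preset (or a hardware encoder such as h264_nvenc, if your build has one) also trades some compression efficiency for a large speed-up.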

Pushing on-the-fly transcoded video to embedded HTTP results in no seek bar

I'm trying to achieve a simple home-grown solution for streaming/transcoding video to a low-end machine that is unable to play the file properly.
I'm trying to do it with ffmpeg (as ffserver will be discontinued).
I found out that ffmpeg has a built-in HTTP server that can be used for this.
The application I'm testing with (for the seek bar) is VLC.
I'm probably doing something wrong here (or trying to do something that others do with other applications).
The ffmpeg command I use is:
d:\ffmpeg\bin\ffmpeg.exe -r 24 -i "D:\test.mkv" -threads 2 -vf scale=1280:720 -c:v libx264 -preset medium -crf 20 -maxrate 1000k -bufsize 2000k -c:a ac3 -seekable 1 -movflags faststart -listen 1 -f mpegts http://127.0.0.1:8080/test.mpegts
This command also gives me the ability to start watching whenever I want (as opposed to using RTMP via UDP, which would start the video as soon as it is transcoded).
I read about moving the moov atom to the beginning of the file, which should be handled by -movflags faststart.
I also tried the -re option without any luck; -r 25 is just there to suppress the "Past duration 0.xx too large" warning, which I read is a normal thing.
The test file is one of many with different encoder settings, etc.
The settings above give me a seek bar, but it doesn't work, and there is no overall duration (and no progress bar). When I switch from mpegts to matroska/mkv I see the duration of the video (and progress) but no seek bar.
If it's possible with ffmpeg alone, I would prefer to stick to it as a standalone solution without extra RTMP or other servers.
After some time I got to the point where:
The seek bar is a player-side thing; HLS v6 supports pointing to a start item, whereas v3 starts wherever it wants (no more than 3 items from the end of the list).
Playback and seeking depend on the player (Safari on iOS supports it, others don't), and ffserver is not needed to push the content.
In the end it works fine without seeking; if seeking is needed, support it on your end with a player/JS player or via middleware like a proxy video server.
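For completeness, one way to get a seekable result out of ffmpeg alone is to write HLS segments to disk and serve them with any static web server, instead of pushing MPEG-TS through the built-in -listen output. A sketch under that assumption (segment length and output path are made up):
ffmpeg -re -i "D:\test.mkv" -vf scale=1280:720 -c:v libx264 -preset medium -crf 20 \
-c:a aac -f hls -hls_time 4 -hls_list_size 0 -hls_playlist_type event D:\www\test.m3u8
With -hls_list_size 0 the playlist keeps every segment, so a player that supports it can seek back through the part of the stream that has already been produced.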

Possible to Combine Live m3u8 stream with PIP overlay from WebRTC source?

Can someone tell me what server-side technology (perhaps ffmpeg) one could use in order to:
1) display this full-screen live-streaming video:
http://aolhdshls-lh.akamaihd.net/i/gould_1#134793/master.m3u8
2) and overlay it in the lower-right corner with a live video coming from a webRTC video-chat stream?
3) and send that combined stream into a new m3u8 live-stream
4) Note that it needs to be a server-side solution; we cannot launch multiple video players in this case (the resulting stream needs to be passed to SmartTVs, which only have one video decoder at a time).
The closest example I've found so far is this article:
https://trac.ffmpeg.org/wiki/Create%20a%20mosaic%20out%20of%20several%20input%20videos
Which isn't really live, nor is it really doing overlays.
Any advice is greatly appreciated.
Let me clarify what you want in this case:
Input video is an HLS stream plus WebRTC: what about delay? Is delay an important factor in your work?
Overlaying something onto the video: this requires decoding the input video, filtering it, and encoding it again, so it needs a lot of CPU resources, and even more if the input video is 1080p.
Restructuring into a new HLS stream: you must set quite a few encoding options to make sure the TS fragments work well; the most important ones are GOP size and TS duration.
You also need a web server to serve the m3u8 index file; you can use nginx or Apache.
What I give you in this answer is the ffmpeg command line that overlays an image onto the input HLS stream and re-creates the TS segments.
The following command line will do what you want in steps 1 to 3:
ffmpeg \
-re -i "http://aolhdshls-lh.akamaihd.net/i/gould_1#134793/master.m3u8" \
-i "[OVERLAY_IMAGE].png" \
-filter_complex "[0:v][1:v]overlay=main_w-overlay_w:main_h-overlay_h[output]" \
-map "[output]" -map 0:a -c:v libx264 -c:a aac -strict -2 \
-f ssegment -segment_list out.list out%03d.ts
This is the basic command line that overlays an image onto your input HLS stream and then creates the TS segments and the index file.
I don't have much more experience with HLS, so it may work without any further tuning, but you should probably tune it for your workload. You will also have to search a little for how to serve the m3u8 with a web server, but that won't be hard.
GOP size (-g) and segment duration (segment_time), as I said, will be the key points of your tuning.
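If the WebRTC side can be bridged to something ffmpeg can ingest (for example an RTMP feed from a gateway, which is an assumption here, since ffmpeg cannot read WebRTC directly), the same idea extends from a static image to a live picture-in-picture overlay in the lower-right corner:
ffmpeg \
-re -i "http://aolhdshls-lh.akamaihd.net/i/gould_1#134793/master.m3u8" \
-i "rtmp://localhost/live/webrtc_feed" \
-filter_complex "[1:v]scale=320:-2[pip];[0:v][pip]overlay=main_w-overlay_w-20:main_h-overlay_h-20[out]" \
-map "[out]" -map 0:a -c:v libx264 -g 50 -c:a aac \
-f hls -hls_time 6 -hls_list_size 6 -hls_flags delete_segments live_out.m3u8
The rtmp:// input URL is only illustrative; the overlay is scaled to 320 px wide and inset 20 px from the bottom-right edge of the main video.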

ffmpeg convert without loss quality

I need to convert all videos for my video player (on a website) when the file type is something other than flv/mp4/webm.
When I use ffmpeg -i filename.mkv -sameq -ar 22050 filename.mp4 I get:
[h264 # 0x645ee0] error while decoding MB 22 1, bytestream (8786)
My point is: what should I do when I need to convert files (.mkv and others not supported by JW Player) to flv/mp4 without quality loss?
Instead of -sameq (removed from FFmpeg), use -qscale 0: the file size will increase, but it will preserve the quality.
Do not use -sameq; it does not mean "same quality".
This option was removed from FFmpeg a while ago, which means you are using an outdated build.
Use the -crf option instead when encoding with libx264. This is the H.264 video encoder used by ffmpeg and, if available, is the default encoder for MP4 output. See the FFmpeg H.264 Video Encoding Guide for more info on that.
Get a recent ffmpeg
Go to the FFmpeg Download page and get a build there. There are options for Linux, OS X, and Windows. Or you can follow one of the FFmpeg Compile Guides. Because FFmpeg development is so active it is always recommended that you use the newest version that is practical for you to use.
You're going to have to accept some quality loss
You can produce a lossless output with libx264, but that will likely create absolutely huge files, and it may not be decodable by the browser and/or supported by JW Player (I've never tried).
The good news is that you can create a video that is roughly visually lossless. Again, the files may be somewhat large, but you need to make a choice between quality and file size.
With -crf, choose a value from 18 to around 29. Choose the highest number that still gives an acceptable quality, and use that value for your videos.
Other things
Add -movflags +faststart. This will relocate the moov atom from the end of the file to the beginning. This will allow the video to begin playback while it is still being downloaded. Otherwise the whole video must be completely downloaded before it can begin playing.
Add -pix_fmt yuv420p. This will ensure a chroma subsampling that is compatible for all players. Otherwise, ffmpeg, by default and depending on several factors, will attempt to minimize or avoid chroma subsampling and the result is often not playable by non-FFmpeg based players.
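Putting those pieces together, a typical web-friendly conversion might look like this (the CRF value and preset are only example choices; adjust them for your own quality/size trade-off):
ffmpeg -i input.mkv -c:v libx264 -crf 20 -preset medium -pix_fmt yuv420p -c:a aac -movflags +faststart output.mp4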
Convert all MKV files to MP4 without quality loss (actually it is only re-packaging):
for %a in ("*.mkv") do ffmpeg.exe -i "%a" -vcodec copy -acodec copy -scodec mov_text "%~na.mp4"
For me this was the best way to convert it:
ffmpeg -i {input} -vcodec copy {output}
I am writing a script in Python that appends multiple .webm files into one .mp4. It was taking me 10 to 20 seconds to convert one 5-second chunk using:
ffmpeg -i {input} -qscale 0 copy {output}
and there are folders with more than 500 chunks.
Now it takes less than a second per chunk; it took me 5 minutes to convert a video 1:20:00 long.
For MP3, the best option is to use -q:a 0 (the same as -qscale 0), but MP3 is always lossy.
To lose less quality, use FLAC instead.
See this documentation link.
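For example, a sketch of encoding the audio to FLAC instead of MP3 (file names are placeholders):
ffmpeg -i input.wav -c:a flac output.flac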
