I am trying to capture several images from an RTSP stream in order to make a timelapse video, and I would like the images to show an on-screen time label. I have been using this command:
vlc.exe rtsp://192.168.1.49/live/main --video-filter=scene --marq-marquee=Time:%H:%M:%S --marq-position=9 --sub-filter=marq --scene-prefix=Timelapse- --scene-format=jpg --scene-path="c:\Timelapse" --scene-ratio 200 --sout-x264-lookahead=10 --sout-x264-tune=stillimage --run-time 43200
I can see the time label in the VLC interface, but the saved images do not show this marquee.
Any suggestions?
Thanks in advance
Maybe it's too late, but I spent a long time finding the solution.
This is the part that loads the marq module and adds the time overlay:
--sub-filter=marq --marq-marquee='%Y-%m-%d %H:%M:%S' --marq-color=32768 --marq-position=20 --marq-opacity=25 --marq-refresh=-1 --marq-size=15
You also need to add the module to the transcode chain:
#transcode{vcodec=h264,vb=2000,acodec=mpga,ab=128,channels=2,samplerate=44100,sfilter=marq}:duplicate{dst=http{dst=:8080/stream.wmv},dst=file{dst=stream.mp4,no-overwrite}}
This is my full command:
cvlc v4l2:///dev/video0 --quiet-synchro --no-osd --sub-filter=marq --marq-marquee='%Y-%m-%d %H:%M:%S' --marq-color=32768 --marq-position=20 --marq-opacity=25 --marq-refresh=-1 --marq-size=15 :v4l2-standard= :input-slave=alsa://hw:0,0 :live-caching=200 :sout='#transcode{vcodec=h264,vb=2000,acodec=mpga,ab=128,channels=2,samplerate=44100,sfilter=marq}:duplicate{dst=http{dst=:8080/stream.wmv},dst=file{dst=stream.mp4,no-overwrite}}' :sout-keep
VLC then streams via HTTP and records the video to a file, with the timestamp overlay burned in.
Hope it helps other people who are looking for a way to do this.
I'm trying to use GStreamer to generate an HLS video from frames within an existing pipeline. Once I get a frame as a NumPy array, I use the following to build the pipeline that creates the .ts segments and the .m3u8 file:
pipeline = " ! ".join([
    "appsrc emit-signals=True do-timestamp=true is-live=True "
    "caps={DEFAULT_CAPS}".format(**locals()),
    "queue",
    "videoconvert",
    "x264enc",
    "mpegtsmux",
    f"hlssink location={playlist}.%04d.ts "
    f"playlist-location={playlist}.m3u8",
])
where DEFAULT_CAPS = "video/x-raw,format={VIDEO_FORMAT},width={WIDTH},height={HEIGHT},framerate={FPS_STR}".format(**locals())
Here's an example of the m3u8 file:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-ALLOW-CACHE:NO
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-TARGETDURATION:15
#EXTINF:15.000000953674316,
20201014_103647.0000.ts
#EXTINF:15.000000953674316,
20201014_103647.0001.ts
#EXTINF:15.000000953674316,
20201014_103647.0002.ts
#EXTINF:7.8000001907348633,
20201014_103647.0003.ts
#EXT-X-ENDLIST
It plays fine in my Ubuntu video player and in Chrome, but not in Safari or Firefox. I've tried changing the pipeline a little, but nothing worked, and I don't really know what the problem is.
Does anyone have any idea?
Following the advice in the comments, I tried changing the profile, but it didn't change anything.
I also found that adding a silent audio track could resolve the problem, because the browser might be expecting one.
EDIT
So the combination of audio + profile makes it work. But since I'm using appsrc to get the frames, I don't know how long the video is going to be, so how can I generate an audio track without that information?
Thanks
So, to make it work, I set the H.264 profile to high and added an audio track over the video.
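For reference, here is a minimal gst-launch-1.0 sketch of that combination (the test sources, caps filter, and output paths are placeholders standing in for the real appsrc pipeline; avenc_aac can be swapped for whatever AAC encoder you have installed). Because audiotestsrc with wave=silence is a live, endless source, no duration has to be known in advance:

# video branch forced to H.264 high profile, plus an endless silent AAC audio branch
gst-launch-1.0 -e \
  videotestsrc is-live=true ! videoconvert ! \
  x264enc ! video/x-h264,profile=high ! queue ! mux. \
  audiotestsrc is-live=true wave=silence ! audioconvert ! \
  avenc_aac ! queue ! mux. \
  mpegtsmux name=mux ! hlssink location=segment.%04d.ts playlist-location=playlist.m3u8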
We are developing a stop motion app for kids and schools.
So what we have is:
A sequence of images and audio files (no overlapping audio in v1, but there can be gaps between them)
What we need to do:
Combine the images into a video with a frame rate between 1 and 12 fps
Add multiple audio files at given start times
Encode with H.265 to MP4 format
I would really like to avoid maintaining a VM or Azure batch jobs running ffmpeg jobs if possible.
Are there any good frameworks or third-party APIs?
I have only found Transloadit as the closest match, but they don't have the option to add multiple audio files.
Any suggestions or experience in this area are much appreciated.
You've mentioned FFmpeg in your tags, and it is a tool that checks all the boxes.
For the first part of your project (making a video from images) you should check this link. To sum up, you'll use this kind of command:
ffmpeg -r 12 -f image2 -i PATH_TO_FOLDER/frame%d.png PATH_TO_FOLDER/output.mp4
-r 12 sets your framerate, here 12 fps. You control the output format with the file extension. To control the video codec, check out this link; you'll need to use the option -c:v libx265 before the filename of your output.
With FFmpeg you add audio the same way you add video, with the -i option followed by your filename. If you want to cut audio, you should seek within it; the -ss and -t options are good for that. If you want an audio track to start at a certain point, check out -itsoffset; you can find a lot of examples. A combined sketch follows.
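To illustrate all three requirements together, here is a hedged sketch (the file names, delays, and the 12 fps rate are placeholders): adelay shifts each audio file to its start time in milliseconds (once per channel), amix combines the shifted tracks, and libx265 handles the H.265 encode.

ffmpeg -framerate 12 -i frames/frame%03d.png -i clip1.wav -i clip2.wav \
  -filter_complex "[1:a]adelay=2000|2000[a1];[2:a]adelay=7500|7500[a2];[a1][a2]amix=inputs=2:duration=longest[aout]" \
  -map 0:v -map "[aout]" -c:v libx265 -pix_fmt yuv420p -c:a aac output.mp4

Note that amix lowers the volume of each input to avoid clipping; follow it with a volume filter if that matters for your use case.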
I just want some confirmation, because I have a sneaking suspicion that I won't be able to do what I want, given that I already ran into some errors about ffmpeg not being able to overwrite the input file. I still have some hope that what I want to do is some kind of exception, but I doubt it.
I have already used ffmpeg to extract a specific frame into its own image file, and I have set the thumbnail of a video with an existing image file, but I can't figure out how to set a specific frame from the video as the thumbnail. I want to do this without extracting the frame to a separate file, and without creating an output file; I want to edit the video directly and change the thumbnail using a frame from the video itself. Is that possible?
You're probably better off asking in #ffmpeg-devel on Freenode IRC.
I'd look at "-ss 33.5", or the more precise filter "-vf 'select=gte(n,1000)'"; both will give the same or a very similar result for 30 fps video.
You can of course pipe the image out to your own process without saving it: "ffmpeg ... -f image2pipe -c:v mjpeg - | ..."
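Putting both answers together, a sketch (the 33.5 s timestamp and file names are placeholders, and since ffmpeg cannot edit a file in place, a new output file is still unavoidable): the first command pipes the chosen frame out as a JPEG without saving it, and the second attaches that frame as the MP4 cover image via the attached_pic disposition.

# 1) grab one frame at 33.5 s as a JPEG on stdout
# 2) remux the video, attaching that frame as the thumbnail
ffmpeg -ss 33.5 -i in.mp4 -frames:v 1 -f image2pipe -c:v mjpeg - | \
  ffmpeg -i in.mp4 -f mjpeg -i - -map 0 -map 1 -c copy -disposition:v:1 attached_pic out.mp4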
I'm trying to convert an animated GIF to video with ffmpeg, but there's a strange problem: the time delays of each frame seem to be off by one frame.
For example, if frame #1 is supposed to be shown for 2000 ms and frames #2 through #10 for 100 ms each, the resulting video immediately skips to frame #2, which is then shown for 2000 ms instead :P
Is this some kind of a bug? Or am I doing something wrong?
Here's my command line:
ffmpeg -i Mnozenie_anim_deop.gif Mnozenie_anim.mp4
So nothing extraordinary, just the defaults. (Unless this is the root of the problem? Maybe my defaults are bad and I need to specify some magic options?)
This problem seems to appear with every output format except MKV, and when I play these files in mplayer, they all behave that way except MKV.
But when I open them in kdenlive (a non-linear video editing program), the problem appears in all of them, including MKV (which is strange, because it plays back just fine in mplayer :q ).
I tried converting the same exact file with this online converter here:
https://ezgif.com/gif-to-mp4
and there is no problem with its output: it plays back fine both in mplayer and when imported into kdenlive, so I guess they must be using some magic command-line options that I'm missing.
Any ideas what can be wrong and how to track down the culprit?
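One way to track down where the delays actually land is to dump the per-frame timestamps of the source GIF and the converted MP4 with ffprobe and compare them (a diagnostic sketch using the sample files linked below):

ffprobe -v error -select_streams v:0 -show_entries frame=pts_time -of csv=p=0 Mnozenie_anim.gif
ffprobe -v error -select_streams v:0 -show_entries frame=pts_time -of csv=p=0 Mnozenie_anim.mp4

If the long gap follows frame #1 in the GIF but frame #2 in the MP4, the delays were shifted by one frame during encoding.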
Edit: Here's a sample animated GIF file I'm trying to convert:
http://nauka.mistu.info/Matematyka/Algebra/Szeregi/Mnozenie_anim.gif
and the MP4 file that I generated from it which demonstrates this problem:
http://sasq.comyr.com/Stuff/Mnozenie_anim.mp4
As you can see, the fade-in starts prematurely and then pauses for a couple of seconds, instead of waiting for a couple of seconds BEFORE the fade-in begins.
I have been looking for a way to convert a sequence of PNGs to a video. There are ways to do that using the concat demuxer in FFmpeg and a script.
The problem is that I want to show certain images longer than others, and I need it to be accurate. I can set a duration (in seconds) in the script file, but I need it to be frame-accurate, and so far I have not been successful.
This is what I want to make:
QuickTime video with transparency (ProRes 4444 or another codec that supports an alpha channel)
25fps
This is what I have: [ TimecodeIn - TimecodeOut in destination video ]
img001.png [0:00:05:10 - 0:00:07:24]
img002.png [0:00:09:02 - 0:00:12:11]
img003.png [0:00:15:00 - 0:00:17:20]
...
img120.png [0:17:03:11 - 0:17:07:01]
Of course this is not the format of the script file, just an idea of the kind of data I am dealing with. The PNG image files are subtitles I generate elsewhere in my application. I would like to be able to export the subtitles as a transparent movie that I can easily import into my video editing software.
I have also been thinking of using blank transparent images as spacers between the actual subtitle images.
After looking around I think this might help:
On the FFmpeg site they explain how to make a timed slideshow.
In the Concat demuxer section they talk about making a slideshow based on a text file, with references to the image files and the duration of each image.
So, I create all the PNG images I need. These images have the subtitle text. Each image holds one subtitle page.
For the moments I want to hide the subtitle, I use a blank PNG.
I generate a text file as explained on the FFmpeg website.
This text file references all the PNGs; for each duration I just calculate out-cue minus in-cue. Easy... I think... A sketch of the resulting list file and encode command is below.
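A minimal sketch under those assumptions (file names are placeholders; the durations are out-cue minus in-cue for the first two timecodes above at 25 fps, so each is a multiple of 1/25 s). The concat demuxer list would look like:

file 'blank.png'
duration 5.40
file 'img001.png'
duration 2.56
file 'blank.png'
duration 1.12
file 'img002.png'
duration 3.36
file 'blank.png'

(The FFmpeg slideshow documentation recommends repeating the last file, since its duration is not always honored.) Then prores_ks with profile 4444 and pixel format yuva444p10le keeps the alpha channel:

ffmpeg -f concat -i list.txt -vf fps=25 -c:v prores_ks -profile:v 4444 -pix_fmt yuva444p10le subtitles.mov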