I am trying to capture frames with ffmpeg from a video stream, which I am saving locally to my folder.
I need to know the exact timestamp at which each frame is captured.
What I have tried so far is:
ffmpeg -i "rtsp://ipaddress/axis-media/media.amp?camera=1" -an -vf showinfo %10d.jpg 2> res.txt
which I got from this source: get-each-frame-time-stamp
This works fine: res.txt contains the time elapsed for each frame since ffmpeg started (if my understanding is correct).
What I need is to get this time appended to the image file names as they are created, or some other way to store the timestamp information within the image.
Any kind of help would be really appreciated.
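One avenue that may help (a sketch, assuming the image2 muxer's -strftime option; note that strftime patterns have no sub-second field, so frames captured within the same second would overwrite each other):
ffmpeg -i "rtsp://ipaddress/axis-media/media.amp?camera=1" -an -vf showinfo -f image2 -strftime 1 "%Y-%m-%d_%H-%M-%S.jpg" 2> res.txt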
I see lots of questions asking how to add EXIF tags to MP4 and other media files with ffmpeg. I am not interested in doing that. I currently have an exiftool command that I run after the fact, but this takes some time because it has to rewrite the entire file.
What I would like to do instead is to add the tags to the MP4 file while I am originally creating it so that I only have to write the file once.
I found this page on creating metadata, but it does not list any of the metadata I want to set. I am trying to set all the timestamp tags, making sure they are properly set to UTC when applicable, as is the case with some of the track/media timestamps.
Update: I see this question has attracted a downvote and a vote to close due to claims about it being off-topic as it allegedly is not about programming. I am using ffmpeg in a bash script which does some automation, so I'm not sure why this claim is being made. There are certainly other similar questions (just look at a few with the ffmpeg tag).
Are you looking for this?
ffmpeg -y -i in.mp4 -metadata creation_time="2015-10-21 07:28:00" -map 0 -c copy out.mp4
Use -metadata creation_time="$(date +'%F %T')" to record the time when your command is launched.
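Since the question mentions UTC: a sketch of the same command with an explicit UTC timestamp (date -u prints UTC, and the trailing Z marks the ISO-8601 value as UTC):
ffmpeg -y -i in.mp4 -metadata creation_time="$(date -u +'%FT%T')Z" -map 0 -c copy out.mp4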
I was playing around with FFmpeg, trying to embed a tag into an mp4 file and remove it later and get back to the original file, when I noticed that the files seemed to be different. Trying to isolate the issue, I tried to do a simple passthrough copy like so:
ffmpeg -i file1.mp4 -codec copy file2.mp4
The md5 sums of both files are different. Am I missing any options/flags to make an exact replica of file1?
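In case it helps to narrow things down: one way to check whether only the container differs, rather than the streams themselves, is ffmpeg's md5 muxer, which hashes the packets instead of the file bytes (a sketch, using the file names from the question):
ffmpeg -i file1.mp4 -map 0 -c copy -f md5 -
ffmpeg -i file2.mp4 -map 0 -c copy -f md5 -
If these two hashes match, the audio/video packets survived the copy intact and the md5 difference comes from container-level data (metadata, atom layout, encoder tags).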
Well, I'm using this project to create a Telegram bot which receives URLs of .mp4 files, downloads them to the server, and uploads them to Telegram.
Issue
Everything works fine so far, except converting certain .mp4 files.
For example, if I use a sample .mp4 video from https://sample-videos.com/, it works fine and converts it successfully.
But if I use a video from some random website, which is also a simple .mp4 file, it doesn't work and throws this error:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x1932420] Format mov,mp4,m4a,3gp,3g2,mj2 detected only with low score of 1, misdetection possible!
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x1932420] moov atom not found
data/720P_1500K_210306701.mp4: Invalid data found when processing input
This really depends on the software that handles the upload.
The moov atom is either located at the beginning, or at the end of the file.
If the software only looks at the first part of the file, and the moov atom is at the end, it will not know how to deal with that file until the file upload is completed.
What you could do, prior to uploading, is move the moov atom to the start of the video, since it's more likely that the software only checks for the moov atom at the start of the file.
With ffmpeg, the command is:
ffmpeg -i input -c:v copy -c:a copy -movflags faststart output.mp4
That would move it to the start of the file.
You will need to do this for every video though.
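To verify where the moov atom ended up, one option (a sketch relying on ffprobe's trace-level log of the atoms it parses) is:
ffprobe -v trace -i output.mp4 2>&1 | grep -e "type:'moov'" -e "type:'mdat'"
Whichever atom is printed first appears first in the file, so after -movflags faststart you would expect moov before mdat.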
I'm looking for a working, up-to-date example of how to record an input stream (e.g. RTSP) to a local file (mp4) with ffmpeg/libav. If you could point me to one or post one, many thanks in advance. I've been searching for many hours and I don't have any experience with this topic.
A lot of examples, libs, etc. are outdated, but I want to use ffmpeg >= v3.3.
Any special things I have to consider (when compiling ffmpeg, or when saving local file to iOS device)?
FFmpeg's syntax is very straightforward.
ffmpeg [global_options] {[input_file_options] -i input_url} ... {[output_file_options] output_url} ...
So, if you don't need to re-encode or decode your RTSP video stream, you can simply run:
ffmpeg -i rtsp://your_stream_url your_file_name.format
where format could be avi, mp4, flv, or others; ffmpeg will automatically package your stream into the output file.
More information here: https://www.ffmpeg.org/ffmpeg.html
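A slightly fuller sketch (the stream URL and duration are placeholders; -rtsp_transport tcp can help when the default UDP transport drops packets):
ffmpeg -rtsp_transport tcp -i "rtsp://your_stream_url" -c copy -t 60 recording.mp4
Here -c copy stores the stream without re-encoding and -t 60 stops the recording after 60 seconds.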
Any special things I have to consider (when compiling ffmpeg, or when
saving local file to iOS device)?
Do you need to compile ffmpeg for a specific reason? I believe libav is enabled in the executable you can download from the site.
I am running Bento4 Mp4Dash to convert my fragmented video files into MPEG-DASH streaming videos. However I seem to get this error
ERROR: unsupported input file, more than one "traf" box in fragment
but only if I have audio enabled. I have found that if I pass -an to FFmpeg (to ignore the audio tracks), my Mp4Dash command runs just fine. Any ideas as to why this would happen?
I solved the problem by telling ffmpeg to extract the audio from the file I wanted to convert to DASH. Here is my solution in case someone still needs it.
extract audio
generate second video file with no audio
tell Bento4 Mp4Dash to use the video and the audio
For all steps there are tons of explanations of how to do it on the internet (a combined sketch follows after the list). Here are some I found:
extract audio with the -vn flag: ffmpeg to extract audio from video
video without audio with the -an flag: Remove audio from video file with FFmpeg
tell Bento4 to use both by following the advice in the official Usage documentation, section Advanced usage: add both the video and audio files as input and put [type=video] before the video file and [type=audio] before the audio file (no space in between)
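Putting the three steps together, a sketch (filenames are placeholders; mp4fragment is Bento4's fragmenting tool, since mp4dash expects fragmented inputs):
# 1. extract the audio only
ffmpeg -i input.mp4 -vn -c:a copy audio.mp4
# 2. make a video-only copy
ffmpeg -i input.mp4 -an -c:v copy video.mp4
# 3. fragment both, then package, marking each input's type
mp4fragment audio.mp4 audio-frag.mp4
mp4fragment video.mp4 video-frag.mp4
mp4dash [type=video]video-frag.mp4 [type=audio]audio-frag.mp4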