I was playing around with FFmpeg, trying to embed a tag into an mp4 file and remove it later and get back to the original file, when I noticed that the files seemed to be different. Trying to isolate the issue, I tried to do a simple passthrough copy like so:
ffmpeg -i file1.mp4 -codec copy file2.mp4
The md5 sums of both files are different. Am I missing any options/flags to make an exact replica of file1?
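ffmpeg always demuxes and remuxes even with -codec copy, and the new container carries its own metadata (an encoder tag, fresh mux timestamps), so the output is practically never byte-identical and the file-level md5 sums will differ. What you can check instead is that the streams themselves survived intact, e.g. with ffmpeg's md5 muxer (a sketch, assuming file1.mp4/file2.mp4 from the question and ffmpeg on PATH):

```shell
# Hash only the demuxed packet data, ignoring container-level differences.
ffmpeg -loglevel error -i file1.mp4 -map 0 -c copy -f md5 -
ffmpeg -loglevel error -i file2.mp4 -map 0 -c copy -f md5 -
# If both commands print the same MD5=... line, the codec data is
# identical even though the containers are not.
```

If the totals ever differ, the framemd5 muxer (-f framemd5) gives a per-packet breakdown to locate where.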
Related
I see lots of questions asking about how to add EXIF tags to MP4 and other media files with ffmpeg. I am not interested in doing this. I currently have an exiftool command that I run after the fact, but this takes some time because it has to rewrite the entire file.
What I would like to do instead is to add the tags to the MP4 file while I am originally creating it so that I only have to write the file once.
I found this page on creating metadata, but it does not list any of the metadata I want to set. I am trying to set all the timestamp tags, making sure they are properly set to UTC where applicable, as is the case with some of the track/media timestamps.
Update: I see this question has attracted a downvote and a vote to close due to claims about it being off-topic as it allegedly is not about programming. I am using ffmpeg in a bash script which does some automation, so I'm not sure why this claim is being made. There are certainly other similar questions (just look at a few with the ffmpeg tag).
Are you looking for this?
ffmpeg -y -i in.mp4 -metadata creation_time="2015-10-21 07:28:00" -map 0 -c copy out.mp4
Use -metadata creation_time="$(date +'%F %T')" to record the time when your command is launched.
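Since creation_time in MP4 is defined as UTC, it is safer to format the stamp with date -u (a sketch; in.mp4 and out.mp4 are placeholders):

```shell
# Stamp out.mp4 with the current wall-clock time in UTC, since the
# MP4 creation_time field is defined as UTC.
ffmpeg -y -i in.mp4 -map 0 -c copy \
       -metadata creation_time="$(date -u +'%Y-%m-%d %H:%M:%S')" out.mp4
```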
When attaching files to a Matroska container they get a name, normally their original file name. However, when you are running Windows and supply the full path to the file you want to attach, ffmpeg sets that full path as the name of the attachment. How can I rename that attachment, preferably in the same command as the attachment itself?
Ok, months later I know the solution: Use
-metadata:s:t filename="cover.jpg"
Replace cover.jpg with the filename you want stored in the Matroska file. If you are doing something more complex with multiple attachments, the stream specifier (:s:t) might be different for you; see the -map and -metadata sections of the ffmpeg documentation.
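Putting it together, attaching and naming in one invocation might look like this (a sketch; in.mkv, cover.jpg, and out.mkv are placeholders, and note that Matroska also requires a mimetype tag on attachments):

```shell
# Attach cover.jpg to the Matroska file and set the name it is stored
# under; -c copy leaves the existing streams untouched.
ffmpeg -i in.mkv -attach cover.jpg \
       -metadata:s:t mimetype=image/jpeg \
       -metadata:s:t filename="cover.jpg" \
       -c copy out.mkv
```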
I'm looking for a working, up-to-date example of how to record an input stream (e.g. RTSP) to a local MP4 file with ffmpeg/libav. If you could point me to one or post one, many thanks in advance. I have been searching for many hours and have no experience with this topic.
A lot of examples, libs, etc. are outdated, but I want to use ffmpeg >= v3.3.
Any special things I have to consider (when compiling ffmpeg, or when saving local file to iOS device)?
FFmpeg's syntax is very straightforward:
ffmpeg [global_options] {[input_file_options] -i input_url} ... {[output_file_options] output_url} ...
So, if you don't need to re-encode or decode your RTSP video stream, you can simply run:
ffmpeg -i rtsp://your_stream_url your_file_name.format
where format can be avi, mp4, flv, or others; ffmpeg will automatically package your stream into the output file.
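Two additions are often useful when recording (a sketch; the URL is a placeholder): -rtsp_transport tcp requests interleaved TCP delivery, which is more robust than the default UDP on lossy networks, and -t bounds the recording length.

```shell
# Record 60 seconds of the camera stream without re-encoding.
# rtsp://your_stream_url is a placeholder for the real camera URL.
ffmpeg -rtsp_transport tcp -i rtsp://your_stream_url -c copy -t 60 your_file_name.mp4
```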
More information here.
https://www.ffmpeg.org/ffmpeg.html
Any special things I have to consider (when compiling ffmpeg, or when
saving local file to iOS device)?
Do you need to compile ffmpeg for a specific reason? I believe libav is enabled in the executables you can download from the site.
I am trying to capture frames from a video stream using ffmpeg and save them locally to a folder.
I need to know the exact timestamp at which each frame is captured.
What I have tried so far is:
ffmpeg -i rtsp://ipaddress/axis-media/media.amp?camera=1 -an -vf showinfo %10d.jpg 2> res.txt
which I got from this source:
get-each-frame-time-stamp
This works fine; res.txt contains the time elapsed for each frame since ffmpeg started (if my understanding is correct).
What I need is to get this time appended to the image file names as they are created, or some other way to store the timestamp information with each image.
Any kind of help would be really appreciated.
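One way to get wall-clock capture times into the file names is the image2 muxer's -strftime option, which expands strftime() date patterns in the output name (a sketch, reusing the stream URL from the question; the pattern only has one-second resolution, so it is paired with -vf fps=1 here to stop later frames overwriting earlier ones within the same second):

```shell
# Name each saved frame after the local time at which it is written.
ffmpeg -i "rtsp://ipaddress/axis-media/media.amp?camera=1" -an \
       -vf fps=1 -f image2 -strftime 1 "%Y-%m-%d_%H-%M-%S.jpg"
```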
I am making a video from images in FFmpeg and want to know whether I can use images placed in different folders. For example, the first image is in folder1 and another in folder2; can I use images from both folders, in any order, to make a single video? If yes, how can I do that?
Yes, it's possible. You can specify several inputs on the command line (-i), but the easiest approach in your case, since the order doesn't matter, is to use ffmpeg's concat feature.
https://trac.ffmpeg.org/wiki/How%20to%20concatenate%20(join,%20merge)%20media%20files
It will create a video with all the images of folder1, then all of folder2, and so on.
Concatenation in ffmpeg comes in two flavors: the concat protocol
ffmpeg -i "concat:file1|file2" etc ...
and the concat demuxer
ffmpeg -f concat -i list etc ...
For image sequences, only the second method (the concat demuxer) works.
So first create a list file describing your content (run touch list and edit it with nano list). Each line names one image; note that the concat demuxer does not expand %03d sequence patterns, so every file must be listed explicitly, in the order you want:
file './folder1/im001.jpg'
file './folder1/im002.jpg'
file './folder2/im001.jpg'
Save the file (in nano, CTRL+X).
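Instead of typing the list by hand, a small loop can generate it (a sketch; the folder layout and image names are the hypothetical ones above):

```shell
# Emit one "file '...'" line per image: folder1 first, then folder2.
for f in ./folder1/*.jpg ./folder2/*.jpg; do
  printf "file '%s'\n" "$f"
done > list
```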
and now run the following command:
ffmpeg -f concat -i list out.mp4
ffmpeg will use your list file as input and feed all your images sequentially into the encoding process.