I have a video file split into a few chunks. The splits were made at random file positions, but the chunks are large enough.
I need to parse each part with a separate instance of AVFormatContext. The chunks come one after another, in the right order. I think there are two options here:
Save and restore the AVFormatContext state;
Save the video file header (from the first chunk) and attach it to every subsequent chunk.
I tried both, but without success. The first approach requires going too far beyond FFmpeg's public API. With the second approach, I am unable to merge the header with a new chunk in a way that FFmpeg can handle.
Can you help me with this?
Thank you.
It totally depends on the file type. With MP4, for example, the header must be completely rewritten and cannot just be copied. With FLV, the header can probably just be copied, but the file MUST be split on a frame boundary, not at a random position. TS could handle this, but you would lose a frame at each cut point.
Realistically, the file will need to be reassembled, then split correctly.
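For a format where the header can simply be copied (FLV/TS, as noted above, and only if the chunks were cut on frame boundaries), a rough sketch of the second approach is to serve the saved header bytes followed by the chunk through a custom AVIOContext. The struct and function names below are illustrative, not part of any existing API:

/* Feed the saved header bytes, then the chunk, into a fresh AVFormatContext. */
#include <libavformat/avformat.h>
#include <libavutil/mem.h>
#include <stdio.h>
#include <string.h>

struct prefix_io {
    const uint8_t *hdr;          /* header bytes saved from the first chunk */
    size_t hdr_size, hdr_pos;
    FILE *chunk;                 /* the current chunk on disk */
};

static int read_cb(void *opaque, uint8_t *buf, int buf_size)
{
    struct prefix_io *io = opaque;
    if (io->hdr_pos < io->hdr_size) {                 /* serve the header first */
        size_t left = io->hdr_size - io->hdr_pos;
        int n = buf_size < (int)left ? buf_size : (int)left;
        memcpy(buf, io->hdr + io->hdr_pos, n);
        io->hdr_pos += n;
        return n;
    }
    int n = (int)fread(buf, 1, buf_size, io->chunk);  /* then the chunk data */
    return n > 0 ? n : AVERROR_EOF;
}

AVFormatContext *open_chunk_with_header(struct prefix_io *io)
{
    unsigned char *buf = av_malloc(4096);
    AVIOContext *pb = avio_alloc_context(buf, 4096, 0, io, read_cb, NULL, NULL);
    AVFormatContext *fmt = avformat_alloc_context();
    fmt->pb = pb;
    fmt->flags |= AVFMT_FLAG_CUSTOM_IO;
    if (avformat_open_input(&fmt, NULL, NULL, NULL) < 0)
        return NULL;             /* caller still has to free pb and its buffer */
    return fmt;
}

Even then, the demuxer can only resynchronize at the next packet boundary inside the chunk, which matches the caveat above about losing a frame at the cut point.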
I'm writing a player for an RTMP stream using the FFmpeg API. I know the usual way to get the stream info into an input format context is with avformat_find_stream_info, and that works. However, because it's RTMP, it takes a long time to scan enough of the stream to pick up the info. I've played with max_analyze_duration and probesize and it's a bit better, but it still takes 10-15 seconds to load. That's way too long for my application.
But I'm the one making the stream on the other end, so I know exactly what's in it. It seems like it would make more sense for me to tell the input format what the stream info is rather than asking it to search for it. But I can't find any examples of this, and my attempts to use avformat_new_stream with an input format aren't working.
Does anyone know if this is possible? And if so, could you point me in the direction of how?
Thanks!
This is what is known as an XY problem.
Yes, you can spoof the sequence header (assuming H.264/AAC), but it won't accomplish what you want. What is happening is that your RTMP server (reflector) is eating the first GOP, so even if the analysis were done faster, you would still have to wait for the first video keyframe.
You need to configure your RTMP server to send the full GOP (in nginx-rtmp the setting is wait_key on).
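For reference, the probing limits the question mentions are set like this before opening the input (the function name and values are only illustrative); as explained above, though, this only shortens the analysis and does not remove the wait for the first keyframe:

/* Minimal sketch: cap how much data avformat_find_stream_info() may examine. */
#include <libavformat/avformat.h>

AVFormatContext *open_rtmp_fast(const char *url)
{
    AVFormatContext *fmt = NULL;
    AVDictionary *opts = NULL;
    av_dict_set(&opts, "probesize",       "65536",  0);  /* bytes to probe */
    av_dict_set(&opts, "analyzeduration", "500000", 0);  /* microseconds   */
    if (avformat_open_input(&fmt, url, NULL, &opts) < 0) {
        av_dict_free(&opts);
        return NULL;
    }
    av_dict_free(&opts);
    avformat_find_stream_info(fmt, NULL);
    return fmt;
}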
I am trying to achieve a way to send large video files through Firefox Send.
Because Firefox Send has a 2.5 GB limit per file that one sends, I need to break up a video file into parts that are each less than 2.5GB.
Is there a relatively simple way to reliably split a video based on a size limit using FFmpeg, rather than by duration? (Using duration would be unreliable, because equal-length portions of a video can differ in size.)
EDIT 1: I apologize for the lack of clarity; I was planning to use a Bash script with FFmpeg and ffsend. I was wondering whether there is any way to do this through video processing rather than zip compression.
The standard utility split is intended for precisely this sort of thing.
# sender does:
split -b 2500m file.mpg file.mpg__split_
# recipient downloads all the pieces and does:
cat file.mpg__split_* > file.mpg
A disadvantage of this procedure is that the individual parts are not usable.
An advantage is that the final output is identical to the original.
I'm developing a system that uses FFmpeg to store video from some IP cameras.
I'm using the segment command to store a video every 5 minutes for each camera.
I have a WPF view where I can search historical videos by date. In this case I use the FFmpeg concat command to generate a video with the desired duration.
All of this works very well. My question is: is it possible to concatenate the current segment file? For example, I need to make a search from date X up to the current time, but the last file has not been finalized yet by FFmpeg, so when I concatenate the files, the last one does not show up because its segment is not finished.
I hope someone can give me some guidance on what I can do.
Some video formats remain playable while they are still being written. That is, you can directly copy the unfinished segment and use it in the merge.
I suggest you use the FLV or TS format for this; MP4 will not work. Also note that there is a delay between encoding and the data actually being written to disk.
I'm not sure whether a direct copy will leave some broken data at the end of the segment file, but FFmpeg will ignore that part during the merge, so the merged video should be fine.
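As a quick way to check this, here is a small sketch (hypothetical function name, placeholder path) that opens a copy of a still-growing .ts/.flv segment and reads packets until the truncated tail is reached; whatever can be demuxed up to that point is what the merge will use:

/* Count how many complete packets a copied, in-progress segment yields. */
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

int count_readable_packets(const char *path)   /* e.g. "copy_of_segment.ts" */
{
    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, path, NULL, NULL) < 0)
        return -1;
    avformat_find_stream_info(fmt, NULL);

    AVPacket *pkt = av_packet_alloc();
    int n = 0;
    while (av_read_frame(fmt, pkt) >= 0) {     /* stops cleanly at the cut-off */
        n++;
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);
    avformat_close_input(&fmt);
    return n;
}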
I am using C and FFmpeg to multiplex real-time audio and video into MP4 files, and everything works fine. But if the power suddenly fails while muxing, the MP4 file being recorded is damaged and VLC cannot play it.
I think the reason is that av_write_trailer was never called, so the index and timestamp information is lost. Using the Araxis Merge tool, I compared a file where av_write_trailer was called successfully with a damaged file where it was not, and found two differences:
1. In the damaged file, the box count value in the file header is wrong.
2. The damaged file has no proper end-of-file data.
Now I want my program to automatically repair the damaged files after power is restored, but I have not found an effective method on Google.
My idea is this: during normal recording, once per second, save the two pieces of information that a damaged file lacks (the box count and the end-of-file data) to a local file, and delete that file once the MP4 has been written completely. If the power fails, then on the next start-up, read that file and write the saved information back into the damaged file at the corresponding positions. The problem is that I don't know how to obtain the box count and the end-of-file data. Is this feasible? If so, what should I do? Looking forward to your reply!
The main cause of MP4 file damage is the header or trailer not being written to the file properly; the whole file then becomes junk data, and no media player is able to play the broken MP4 file.
So:
First, the broken file has to be repaired before it can be played.
There are some applications and tricks available to repair it and get the data back; links are given below:
http://grauonline.de/cms2/?page_id=5 (Windows / Mac) (paid :( )
https://github.com/ponchio/untrunc (Linux-based OS) (of course, free!)
Second, manually repairing the corrupt file using a hex editor.
The logic behind this hack:
This hack requires the broken MP4 file and a good video file, where both videos were captured with the same camera. The good file should also be larger than the broken MP4 file.
Open both video files in any hex editor, copy the trailer part from the good video file into the broken one, and save it. Done!
Note: always keep a backup of the video file.
Follow these links for detailed information:
http://janit.iki.fi/repair-corrupted-mp4-video/
https://www.lfs.net/forum/thread/45156-Repair-a-corrupt-mp4-file%3F
http://hackaday.com/2015/04/02/manual-data-recovery-with-a-hex-editor/
http://www.hexview.org/hex-repair-corrupt-file.html
Third, even though the MP4 format has many advantages, this kind of error is unpredictable and difficult to handle.
Thus, using a format such as MPG with AV_CODEC_ID_MPEG1VIDEO/AV_CODEC_ID_MPEG2VIDEO (FFmpeg) may help avoid this kind of error. The MPG format does not require a header/trailer, so after a sudden power failure the MPG file can still play whatever frames were stored up to that point.
Note: other formats and codecs with similar properties are also available.
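To illustrate the third point, here is a rough sketch of muxing to MPEG-PS with MPEG-2 video; the function name and parameters are only illustrative, and encoder setup, packet writing and most error handling are omitted. The point is that the frames already written stay playable even if av_write_trailer() is never reached:

/* Sketch: record to an MPEG-PS ("mpg") file, which needs no index or trailer. */
#include <libavformat/avformat.h>

int record_mpg(const char *filename, int width, int height)
{
    AVFormatContext *oc = NULL;
    if (avformat_alloc_output_context2(&oc, NULL, "mpeg", filename) < 0)
        return -1;

    AVStream *st = avformat_new_stream(oc, NULL);
    st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    st->codecpar->codec_id   = AV_CODEC_ID_MPEG2VIDEO;   /* as suggested above */
    st->codecpar->width      = width;
    st->codecpar->height     = height;
    st->time_base            = (AVRational){1, 90000};

    if (avio_open(&oc->pb, filename, AVIO_FLAG_WRITE) < 0)
        return -1;
    if (avformat_write_header(oc, NULL) < 0)
        return -1;

    /* ... call av_interleaved_write_frame() for each encoded packet; if power
     * is lost before av_write_trailer(), whatever was written remains playable ... */

    av_write_trailer(oc);
    avio_closep(&oc->pb);
    avformat_free_context(oc);
    return 0;
}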
As you may know, when you record a video on a Windows Phone it is saved as a .mp4. I want to be able to access the video file (even if it's only stored in the app's isolated storage) and manipulate the pixel values of each frame.
I can't find anything that lets me load a .mp4 into an app and then access its frames. I also want to be able to save the manipulated video as a .mp4 file, or share it.
Has anyone figured out a good set of steps to do this?
My guess was to first load the .mp4 file into a Stream object. From here I don't know what exactly I can do, but I want to get it into a form where I can iterate through the frames, manipulate the pixels, then create a .mp4 with the audio again once the manipulation is completed.
I tried doing the exact same thing once. Unfortunately, there are no publicly available libraries that will help you with this. You will have to write your own code to do this.
The way to go about this would be to first read up on the storage format of mp4 and figure out how the frames are stored there. You can then read the mp4, extract the frames, modify them and stitch them back in the original format.
My biggest concern is that the hardware might not be powerful enough to accomplish this in a sufficiently small amount of time.