How to convert video to mp4 [javacv]

I would like to convert video files to .mp4. This works for containers like .avi or .flv, but there is a problem with .mkv: for that format, audio and video end up out of sync. I've seen that conversation on GitHub and did exactly what was advised. The result was actually the same as what user The-Crocop had; it ended with "Try to use different formats and codecs.". I have a sample video in an .mkv container (size 1 MB). The information (among others) I got about that file from MediaInfo is:
Codec ID: V_MPEG4/ISO/ASP
Codec ID/Info: Advanced Simple Profile
Color space: YUV
Chroma subsampling: 4:2:0
That is the reason for my setting:
FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(input);
grabber.start();
FrameRecorder recorder = new FFmpegFrameRecorder(output, grabber.getImageWidth(), grabber.getImageHeight(), grabber.getAudioChannels());
recorder.setVideoCodecName("h262");
recorder.setFormat("mp4");
recorder.setPixelFormat(avutil.AV_PIX_FMT_YUV420P);
recorder.setVideoQuality(0);
recorder.setAudioQuality(0);
recorder.start();
Frame frame;
while ((frame = grabber.grabFrame()) != null) {
recorder.setTimestamp(grabber.getTimestamp());
recorder.record(frame);
}
recorder.stop();
grabber.stop();
MediaInfo says that the codec used in this container is MPEG-4 Part 2, and the FFmpeg documentation lists that codec as supported. I am not sure about passing "h262" as the codec name, though, as I can't find a matching AVCodecID for it.
What might be the reason the video and audio are out of sync in my case?
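One frequent cause of this kind of drift (an assumption here, not something confirmed by the thread) is that the recorder's frame rate and audio sample rate are never copied from the grabber, so the timestamps set in the loop above are interpreted against a different clock. The effect is pure arithmetic and can be sketched without any javacv dependency; the 25 fps and 30 fps figures below are hypothetical:

```java
public class SyncDrift {
    // Microsecond timestamp of frame n at a given frame rate,
    // mirroring javacv's microsecond-based getTimestamp().
    static long timestampMicros(int frameNumber, double fps) {
        return Math.round(frameNumber * 1_000_000.0 / fps);
    }

    public static void main(String[] args) {
        double sourceFps = 25.0;   // what the grabber reads from the .mkv
        double assumedFps = 30.0;  // hypothetical default if never set on the recorder

        int frames = 25 * 60;      // one minute of source video
        long correct = timestampMicros(frames, sourceFps);
        long wrong = timestampMicros(frames, assumedFps);

        // After one minute the video is presented 10 seconds early
        // relative to the audio clock.
        long driftMicros = correct - wrong;
        System.out.println("drift after 1 min: " + driftMicros / 1_000_000.0 + " s");
    }
}
```

If this is the cause, copying `grabber.getFrameRate()` and `grabber.getSampleRate()` onto the recorder before `start()` would be the first thing to try.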

Related

How to stream WEBM Video by Media Source Extensions API

I'm developing a video streaming website using MSE.
Each video is converted to fragmented MP4 (h264,aac => avc1,mp4a).
It is working very well, but what if I wanted to use the WebM format? YouTube and Facebook sometimes use it.
I want to know how to get an index (like the sidx atom in fMP4) from VP8, VP9 or Vorbis streams.
I use Bento4 and ffmpeg to get metadata from the video and audio, but Bento4 is for MP4 only, and I use MP4Box.js to parse the index in the browser with JavaScript.
What should I use (ffmpeg or something else) to create fragmented WebM, and how do I get the index/stream info needed to append segments to an MSE SourceBuffer? The stream should of course be seekable.

Libavformat- Passing an object of images to libavformat to generate a video

I am trying to generate a video with libavformat/libavcodec from a bunch of images that are in memory.
Can someone point me in the right direction, please?
Thanks in advance.
First, the basics of creating a video from images with FFmpeg are explained here.
If you simply want to change/force the format and codec of your video, here is a good start.
For the raw FFmpeg documentation you can use the Video and Audio Format Conversion section, the Codec Documentation, the Format Documentation, and the image2 demuxer documentation (this demuxer manages images as an input).
If you just want to take images and make a simple video out of them, just look at the first two links. FFmpeg's documentation gives you powerful tools, but don't use them if you don't need them.
A sample command to create a video from images is:
ffmpeg -i image-%03d.png video.mp4
This will take all the files in sequence, from image-000.png up to the highest number available, and make a video out of them.
You can force the format with the extension of the output file. To force the video codec use -c:v followed by a codec name available in the codec documentation.
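The %03d in that pattern is ordinary printf-style numbering, so you can check which file names ffmpeg's image2 demuxer will look for by expanding the same format string yourself (a small Java sketch; the pattern is the one from the command above):

```java
public class SequencePattern {
    // Expand the printf-style pattern that ffmpeg's image2 demuxer uses
    // for numbered input sequences, e.g. "image-%03d.png".
    static String frameName(String pattern, int index) {
        return String.format(pattern, index);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            System.out.println(frameName("image-%03d.png", i));
        }
        // image-000.png
        // image-001.png
        // image-002.png
    }
}
```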

How can I programmatically delete frames from an mpg video and keep the audio in sync?

I stream training videos from work, but I don't have a great connection and get a lot of buffering. I have captured the streamed video from the PC screen into an mpg file. Fortunately, when the video buffers it shows a characteristic buffering icon in the center of the screen and there is no sound. Using ffmpeg, I have been able to write a C++ method that can step through the video frames of the mpeg file, convert each to an RGB frame, and detect the presence or absence of this characteristic buffering icon.
The final thing I need to do is generate a new mpeg file with only the frames that do not have this buffering image, and keep all the audio in sync. How do I do that with ffmpeg?
I have already found the dts and pts timestamps on the video and audio frames, but I don't know how to use this information to re-encode just the frames that don't have the buffering image. The re-encode should keep all the properties of the original (frame rate, resolution, size, etc.).
Here is the stripped-down code I use to traverse the frames and detect which ones I want to keep (omitting a lot of initialization and error checking):
while (av_read_frame(pFormatCtx, &packet) >= 0)
{
    // Is this a packet from the video stream?
    if (packet.stream_index == videoStream)
    {
        // Decode video frame
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
        // Did we get a video frame?
        if (frameFinished)
        {
            // Convert the image from its native format to RGB
            sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                      pFrame->linesize, 0, pCodecCtx->height,
                      pFrameRGB->data, pFrameRGB->linesize);
            if (ThisIsAFrameIWant(pFrameRGB))
            {
                // WRITE FRAME TO NEW MPEG KEEPING AUDIO IN SYNC
            }
        }
    }
}
This will require some expertise.
First, if you haven't already, you should create a new output context with a video and an audio stream (e.g. mpeg2video + mp2).
My experience is that with problematic streams like yours you can forget about the pts/dts approach (anyone out there who knows better, be my guest). Instead, keep video and audio in separate buffers (the incoming packets after av_read_frame()). Have a look at ffplay: it keeps a video queue and an audio queue in its structures; use the same idea.
For the audio you need some processing before queuing. The trick is that the audio packets you obtain with av_read_frame() will not, in general, contain exactly one video frame's worth of samples, so you have to repacketize them manually with additional buffering. For example, 16-bit 48000 Hz PCM audio for 60 Hz video should have 48000 / 60 = 800 audio samples per video frame. If you can manage this, it is easy to keep A/V in sync: just make sure audio and video have the same number of packets in their buffers, so when you drop a video packet, drop the matching audio packet too.
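The repacketizing step described above can be sketched without any FFmpeg types: accumulate incoming samples in a FIFO and only emit fixed-size audio frames of exactly one video frame's worth of samples (the 48000 / 60 = 800 figure from the answer; stereo and sample-format handling are omitted):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

public class AudioRepacketizer {
    private final int samplesPerVideoFrame;
    private final ArrayDeque<Short> fifo = new ArrayDeque<>();

    AudioRepacketizer(int sampleRate, int videoFps) {
        this.samplesPerVideoFrame = sampleRate / videoFps; // 48000 / 60 = 800
    }

    // Push whatever the demuxer delivered; pull out complete,
    // fixed-size frames that line up one-to-one with video frames.
    List<short[]> push(short[] samples) {
        for (short s : samples) fifo.add(s);
        List<short[]> out = new ArrayList<>();
        while (fifo.size() >= samplesPerVideoFrame) {
            short[] frame = new short[samplesPerVideoFrame];
            for (int i = 0; i < frame.length; i++) frame[i] = fifo.poll();
            out.add(frame);
        }
        return out;
    }

    public static void main(String[] args) {
        AudioRepacketizer r = new AudioRepacketizer(48000, 60);
        // 1000 samples in -> one complete 800-sample frame out, 200 buffered
        System.out.println(r.push(new short[1000]).size() + " frame(s) ready");
    }
}
```

With audio sliced this way, dropping video frame n and audio frame n together keeps the two streams aligned.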
Second, do not use RGB with any common video codec (mpeg2, h264, hevc) when encoding video packets. Use yuv420p instead. This will save you some trouble.
Hope that helps.
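On that last point: yuv420p means the encoder wants separate Y, U and V planes rather than interleaved RGB. A minimal sketch of the per-pixel conversion, assuming the common BT.601 full-range integer approximation (the 4:2:0 chroma subsampling step, i.e. averaging U and V over 2x2 pixel blocks, is omitted):

```java
public class RgbToYuv {
    // BT.601 full-range RGB -> YUV, the 8-bit integer approximation
    // commonly used before handing planes to a yuv420p encoder.
    static int[] toYuv(int r, int g, int b) {
        int y = (77 * r + 150 * g + 29 * b) >> 8;
        int u = ((-43 * r - 85 * g + 128 * b) >> 8) + 128;
        int v = ((128 * r - 107 * g - 21 * b) >> 8) + 128;
        return new int[] { clamp(y), clamp(u), clamp(v) };
    }

    static int clamp(int x) { return Math.max(0, Math.min(255, x)); }

    public static void main(String[] args) {
        int[] white = toYuv(255, 255, 255); // {255, 128, 128}
        int[] black = toYuv(0, 0, 0);       // {  0, 128, 128 }
        System.out.println("white Y=" + white[0] + ", black Y=" + black[0]);
    }
}
```

Note that for grey tones U and V sit at the neutral value 128, which is why a purely luma-based icon detector can ignore the chroma planes entirely.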

Can I get pictures/stills/photos from inside a container file from a CD-I disc?

I have ffmpeg setup.
Is there a way to extract pictures/stills/photos (etc.) from a container file from an old CD-I game that I have?
I don't want to extract the audio or video. And I don't want frames from the videos either.
I want the bitmaps (etc.) from INSIDE that container file.
I know my Windows 8.1 PC can't read inside that container file, so I'm hoping there's a way to extract all the files (that I want) using ffmpeg instead.
(IsoBuster only gives the audio and video, so I already know about IsoBuster.)
I think there are no individual headers for the pictures/stills/photos, etc.
Here's what ExifTool decoded the file as:
ExifTool Version Number (10.68)
File Name (green.3t)
File Size (610 MB)
File Permissions (rw-rw-rw-)
File Type (MPEG)
File Type Extension (mpg)
MIME Type (video/mpeg)
MPEG Audio Version (1)
Audio Layer (2)
Audio Bitrate (80 kbps)
Sample Rate (44100)
Channel Mode (Single Channel)
Mode Extension (Bands 4-31)
Copyright Flag (False)
Original Media (False)
Emphasis (None)
Image Width (368)
Image Height (272)
Aspect Ratio (1.0695)
Frame Rate (25 fps)
Video Bitrate (1.29 Mbps)
Duration (1:02:12 approx)
Image Size (368x272)
Megapixels (0.100)
Thank you for reading and - help!! :D

Where is the documentation for the Mjpeg codec used in mencoder, VLC and FFMpeg?

Mencoder has a lovely option for converting a mjpeg file into an avi file with an 'MJPG' codec that plays in VLC.
The command line to do this is:
mencoder filename.mjpeg -oac copy -ovc copy -o outputfile.avi -speed 0.3
where 0.3 is the ratio of the desired playback frame rate to the default 25 fps. All this does is make a copy of the mjpeg file, put an AVI header on top and, at the end, what seems to be an index of the frame positions in the file.
I want to replicate this in my own code, but I can't find documentation anywhere. What is the exact format of the index section? The header has extra filler bytes in it for some reason - what's this about?
Anyone know where I can find documentation? Both mencoder and vlc seem to have this codec built in.
After much work, study and fiddling around with HxD and RiffPad, I finally figured it out. It would take a long blog entry to explain it all, but basically there isn't really an 'MJPG' codec out there - MJPG just uses a few tricks and unusual parts of the AVI standard to produce an indexed file.
The key is to place a '00dc' tag and an Int32 length (8 bytes in total) in front of each JPEG start-of-image marker. If you want the AVI to be random access, then you need an index at the end which points to each of the '00dc' tag positions.
VLC will play this natively. If you have ffmpeg installed, then Windows Media Player uses that to decode these types of mjpg files.
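The 8-byte chunk header and the 16-byte index entries described above can be built with a couple of ByteBuffer calls. This is a sketch of that layout, not a full AVI writer; the AVIIF_KEYFRAME flag and the convention that idx1 offsets are measured from the start of the 'movi' data are assumptions worth verifying against your own files in RiffPad:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class AviChunk {
    // The 8-byte header placed before each JPEG: '00dc' FourCC
    // followed by the payload length, little-endian.
    static byte[] chunkHeader(int jpegLength) {
        ByteBuffer b = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN);
        b.put(new byte[] {'0', '0', 'd', 'c'});
        b.putInt(jpegLength);
        return b.array();
    }

    // One 16-byte 'idx1' entry pointing back at a chunk header;
    // offset is assumed to be relative to the 'movi' list data.
    static byte[] indexEntry(int offset, int jpegLength) {
        ByteBuffer b = ByteBuffer.allocate(16).order(ByteOrder.LITTLE_ENDIAN);
        b.put(new byte[] {'0', '0', 'd', 'c'});
        b.putInt(0x00000010); // AVIIF_KEYFRAME: every JPEG is a keyframe
        b.putInt(offset);
        b.putInt(jpegLength);
        return b.array();
    }

    public static void main(String[] args) {
        System.out.println(chunkHeader(4096).length + " header bytes, "
                + indexEntry(4, 4096).length + " index bytes");
    }
}
```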
