I am using libav to decode aac audio and then transcode it into mp3 using libmp3lame.
I know that when I decode an AAC stream I get AV_SAMPLE_FMT_FLTP output, while the MP3 encoder needs its input in AV_SAMPLE_FMT_S16P, so I am doing a sample format conversion with swr_convert from libswresample. I also know that the number of samples per decoded frame (1024) differs from the one required by libmp3lame (1152), so I am buffering samples as well.
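For reference, the 1024-to-1152 sample buffering described above can be sketched in plain C, independent of libav. The frame sizes come from the question; the encoder call is hypothetical and only marks where libmp3lame would be invoked:

```c
#include <string.h>

#define DEC_FRAME 1024   /* samples per decoded AAC frame */
#define ENC_FRAME 1152   /* samples per MP3 frame expected by libmp3lame */
#define FIFO_CAP  (DEC_FRAME + ENC_FRAME)

static short fifo[FIFO_CAP];
static int   fifo_len = 0;

/* Push one decoded frame of n samples; for every complete 1152-sample
 * frame that becomes available, an encoder call would go where the
 * comment is. Returns the number of full encoder frames produced. */
int push_decoded(const short *samples, int n)
{
    int frames = 0;
    memcpy(fifo + fifo_len, samples, n * sizeof(short));
    fifo_len += n;
    while (fifo_len >= ENC_FRAME) {
        /* encode_mp3_frame(fifo); -- hypothetical encoder call */
        fifo_len -= ENC_FRAME;
        memmove(fifo, fifo + ENC_FRAME, fifo_len * sizeof(short));
        frames++;
    }
    return frames;
}
```

Since 9 × 1024 = 8 × 1152 = 9216, feeding nine decoded frames yields exactly eight encoder frames with nothing left in the FIFO.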
The crash is not caused by the buffering; execution never even reaches that part. It crashes inside swr_convert.
If I look at the stack trace in gdb, I see the crash happening somewhere in
ff_float_to_int16.next()
What could be the possible problem?
The issue: I need to convert an H.264 stream streamed over RTP into MJPEG, but for very convoluted reasons I am required to use the libjpeg-turbo library, not the MJPEG encoder that comes with FFmpeg. So the only thing FFmpeg needs to do is convert the H.264 RTP stream to rawvideo in RGBA and output it to a socket, where I then do the transcoding manually.
However, libjpeg-turbo only expects complete frames, meaning I need to collect rawvideo packet fragments and somehow synchronize them. Putting incoming raw video fragments into a buffer as they come results in heavily broken images.
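One way to synchronize the fragments is to exploit the fact that rawvideo has no headers at all: an RGBA frame is always exactly width × height × 4 bytes, so complete frames can be cut out of the byte stream by counting. A minimal sketch in plain C (the dimensions and the libjpeg-turbo call are illustrative placeholders):

```c
#include <string.h>
#include <stddef.h>

#define WIDTH  320
#define HEIGHT 240
#define FRAME_BYTES ((size_t)WIDTH * HEIGHT * 4)   /* RGBA: 4 bytes/pixel */

static unsigned char frame[FRAME_BYTES];
static size_t filled = 0;

/* Feed arbitrary-sized fragments as they arrive from the socket;
 * returns the number of complete frames assembled (each of which
 * would be handed to libjpeg-turbo for compression). */
int feed_fragment(const unsigned char *data, size_t len)
{
    int frames = 0;
    while (len > 0) {
        size_t take = FRAME_BYTES - filled;
        if (take > len) take = len;
        memcpy(frame + filled, data, take);
        filled += take;
        data   += take;
        len    -= take;
        if (filled == FRAME_BYTES) {
            /* compress_with_libjpeg_turbo(frame); -- hypothetical */
            filled = 0;
            frames++;
        }
    }
    return frames;
}
```

Note that this byte-counting approach only stays in sync if no UDP packets are lost; a single dropped packet shifts the alignment for every following frame, which would produce exactly the kind of heavily broken images described.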
Is there some way of saving the header information of the initial h.264 RTP packets? The command I'm currently using is very straightforward:
-i rtsp://: -vcodec rawvideo -f rawvideo udp://:
My task is to create html5 compatible video from input video (.avi, .mov, .mp4, etc.). My understanding is that my output should be .webm or .mp4 (H264 video, aac audio).
I use ffmpeg for conversion and it takes a lot of time. I wonder if I could use ffprobe to test if input video is "H264" and "aac" and if so then maybe I could just copy video/audio into output without modifications.
That is, my idea is as follows:
Get input video info using ffprobe:
ffprobe {input} -v quiet -show_entries stream=codec_name,codec_type -print_format json
The result would be JSON like this:
"streams": [
    {"codec_name": "mjpeg", "codec_type": "video"},
    {"codec_name": "aac", "codec_type": "audio"}
]
If the JSON says the video codec is h264, I think I could just copy the video stream; likewise, if the audio codec is aac, I could just copy the audio stream.
The JSON above says my audio is aac, so I think I could copy the audio stream into the output, but the video stream (mjpeg) still needs conversion. For this example my ffmpeg command would be:
ffmpeg -i {input} -c:v libx264 -c:a copy output.mp4
The question is if I could always use this idea to produce html5 compatible video and if this method will actually speed up video conversion.
The question is if I could always use this idea to produce html5 compatible video
Probably, but some caveats:
Your output may use H.264 High profile, but your target device may not support that (but that is not too likely now).
Ensure that the pixel format is yuv420p. If it is not, the video may not play and you will have to re-encode with -vf format=yuv420p. You can check by adding pix_fmt to your -show_entries stream list.
If the file is directly from a video camera, or some other device with inefficient encoding, then the file size may be excessively large for your viewer.
Add -movflags +faststart to your command so the video can begin playback before the file is completely downloaded.
and if this method will actually speed up video conversion.
Yes, because you're only stream copying (re-muxing) which is fast, and not re-encoding some/all streams which is slow.
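The decision described in this answer (copy a stream when it is already in the target codec and pixel format, otherwise re-encode) can be sketched as a small helper. This function is purely illustrative, not part of any library; it maps the codec names reported by ffprobe to per-stream ffmpeg arguments:

```c
#include <string.h>

/* Pick per-stream ffmpeg arguments for an HTML5-compatible MP4,
 * based on codec_name/pix_fmt values reported by ffprobe. */
const char *video_args(const char *codec_name, const char *pix_fmt)
{
    if (strcmp(codec_name, "h264") == 0 && strcmp(pix_fmt, "yuv420p") == 0)
        return "-c:v copy";                       /* already compatible */
    return "-c:v libx264 -vf format=yuv420p";     /* must re-encode */
}

const char *audio_args(const char *codec_name)
{
    if (strcmp(codec_name, "aac") == 0)
        return "-c:a copy";
    return "-c:a aac";
}
```

For the mjpeg/aac example in the question, this yields "-c:v libx264 -vf format=yuv420p" plus "-c:a copy", matching the ffmpeg command shown above.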
I read what an Elementary Stream is on Wikipedia. A tool I am using, Live555, demands an "H.264 Video Elementary Stream File". So when exporting a video from a video application, do I have to choose specific preferences to generate an "Elementary Stream"?
If you're using ffmpeg you could use something similar to the following:
ffmpeg -f video4linux2 -s 320x240 -i /dev/video0 -vcodec libx264 -f h264 test.264
You'll have to adapt the command line for the file type you're exporting the video from.
This generates a file containing H.264 access units, where each access unit consists of one or more NAL units, each prefixed with a start code (00 00 01 or 00 00 00 01). You can open the file in a hex editor to take a look at it.
You can also create an H.264 elementary stream file (.264) by using the H.264 reference encoder on raw YUV input files.
If you copy the generated .264 file into the live555 testOnDemandRTSPServer directory, you can test streaming the file over RTSP/RTP.
Can you give some references to read more about NAL units and the H.264 elementary stream? How can I quickly check if a stream is an elementary stream?
Generally, anything in a container (avi or mp4) is not an elementary stream. The typical extension used for elementary streams is ".264". The quickest way to double-check that a file is an elementary stream is to open it in a hex editor and look for a start code at the beginning of the file (00 00 00 01). Note that there should be 3-byte (00 00 01) and 4-byte (00 00 00 01) start codes throughout the file, one before every NAL unit.
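The check described above is easy to automate: just test whether the file begins with an Annex B start code. A minimal sketch:

```c
#include <stddef.h>

/* Return 1 if the buffer begins with an Annex B start code
 * (00 00 01 or 00 00 00 01), as an H.264 elementary stream should. */
int starts_with_start_code(const unsigned char *buf, size_t len)
{
    if (len >= 3 && buf[0] == 0 && buf[1] == 0 && buf[2] == 1)
        return 1;
    if (len >= 4 && buf[0] == 0 && buf[1] == 0 && buf[2] == 0 && buf[3] == 1)
        return 1;
    return 0;
}
```

An MP4 file, by contrast, begins with a length-prefixed box (typically ending in "ftyp"), so this check quickly tells the two apart.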
Why does live555 not play h264 streams which are not elementary?
This is simply because live555 has not implemented the demuxing required for those containers (e.g. avi or mp4). AFAIK live555 does support demuxing H.264 from the Matroska container.
Does anyone know a good way to use the HTTP Live Streaming tools on non-Mac platforms?
Can you at least tell me if there are good alternatives? I need mediafilesegmenter and mediastreamvalidator.
Or maybe someone has source code or something like that...
UPD: I've tried different segmenters; most of them are based on Carson's open-sourced segmenter. The difference between Apple's mediafilesegmenter and this one is that it accepts only a transport stream, not just any video. And I need to segment H.264 videos.
When I use ffmpeg to convert h264 to MPEG-TS I get much bigger files in the end. Even if I try to preserve the same audio codec (aac), it changes the video codec from AVC to MPEG-2.
Damn, I hate Apple. How can they propose this thing as a standard if they don't even provide workarounds for other platforms?
I still need to find a way to segment h264 videos and keep the AVC and AAC codecs in the segmented files.
If you're not specifying the video codec while specifying an MPEG-2 transport stream container, FFmpeg will default to MPEG-2 video coding. If you already have MPEG-4 AVC (H.264) encoded video and AAC audio, then you can instruct FFmpeg not to re-encode the video and audio with these options: -vcodec copy -acodec copy
Your final command should be something like this:
ffmpeg -i inputfile -vcodec copy -acodec copy -f mpegts outputfile.ts
Then you can use one of the segmenter tools for segmenting and building the playlist. It's worth mentioning that new versions of FFmpeg support segmenting, but you still would need a program to create the playlist file.
I just have an H.264 encoded video stream and I want to make an mp4 file.
/* find output format for mp4 */
m_pOutputFormat= av_guess_format("mp4", NULL, NULL);
if (!m_pOutputFormat) return FALSE; // could not find suitable output format(mp4).
With this code I get mpeg as the video codec, not h264. I think that's because I built ffmpeg without libx264 (and I don't want to build ffmpeg with libx264 for license reasons).
m_pOutputFormat->video_codec= CODEC_ID_H264;
When I change video_codec to CODEC_ID_H264, it works fine in some players (KMPlayer), but it does not work on iPod or QuickTime.
This code may be wrong because I could not change the codec_tag value (that variable is const).
1. How can I get a different result from av_guess_format("mp4", NULL, NULL) without recompiling libav with libx264?
2. How can I make the mp4 file properly?
I think you don't need libx264 to work properly with an h264 stream, because libx264 is an encoder; I've worked with h264 packets using libav compiled without libx264. It works fine for reading packets from one file and writing them to another.
As for the call to av_guess_format, you can either provide an appropriate MIME type for h264 (video/h264, for example) or just give the function another short type name. This should work: av_guess_format("h264", NULL, NULL).
Look here for some code example (objective-C): http://libav-users.943685.n4.nabble.com/Libavcodec-encoding-with-libx264-td3250373.html