I need to get info from the raw h264 track of an mkv file.
Some time ago, I used to extract the h264 raw stream and analyze it by itself.
Now I would like to limit the disk usage and avoid the extraction step, so there are two options:
use ffmpeg to pipe h264 to mediainfo
use a sort of ramdisk
I tried
ffmpeg -i original.mkv -map 0:v:0 -c copy -bsf:v h264_mp4toannexb -f h264 - | mediainfo -
but it returns nothing.
What am I doing wrong?
mediainfo does not (yet) support pipes (-). You may want to add a feature request on the MediaInfo tracker.
But I don't see what better metadata report you would get that way compared to 'mediainfo original.mkv', as MediaInfo supports parsing of H264 in MKV.
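If you really do want to analyze the raw Annex B stream without leaving a copy on a regular disk, the ramdisk idea from the question is a workaround; a minimal sketch, assuming a Linux box with tmpfs mounted at /dev/shm:
ffmpeg -i original.mkv -map 0:v:0 -c copy -bsf:v h264_mp4toannexb -f h264 /dev/shm/track.h264
mediainfo /dev/shm/track.h264
rm /dev/shm/track.h264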
I have a raw h264 file that I can display with VLC, on a mac:
open -a VLC file.h264
I can convert this to mp4 with the command line
ffmpeg -f h264 -i file.h264 -c:v copy file.mp4
But what I really want to do is something like:
cat file.h264 | ffmpeg > file.mp4
Reason being that the input is coming over a socket and I want to convert it and send it to a video tag in an html file on the fly.
An alternative solution would be a way to display the raw h264 in a web page without converting it to mp4 first.
The input is coming in frame by frame; the first four bytes are 0,0,0,1. My understanding is that this is the h264 Annex B format.
I know nothing about video formats, so I would be grateful to be pointed in a direction to look.
Should I look into writing code using libavcodec, like this question, or is there an off-the-shelf solution?
H.264 muxed to MP4 using libavformat not playing back
Thanks!
The command line below will create a fragmented MP4 (Windows cmd)
type test.h264 | ffmpeg -i - -vcodec copy -f mp4 -movflags frag_keyframe+empty_moov pipe:1 > test_frag.mp4
You should be able to find lots of JavaScript code to play fragmented MP4s.
For example: https://github.com/chriswiggins/videojs-fmp4
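Since the question mentions a Mac, the macOS/Linux equivalent would presumably look like this (a sketch; -f h264 is added so ffmpeg does not have to guess the input format from a pipe):
cat test.h264 | ffmpeg -f h264 -i - -vcodec copy -f mp4 -movflags frag_keyframe+empty_moov pipe:1 > test_frag.mp4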
I've been looking all over the web & StackOverflow, and can't get this to work. I have an audio file that I'd like to split into mp3 files and generate a corresponding m3u8 file.
I've tried this, which was the closest:
ffmpeg -i sometrack.wav -c:a libmp3lame -b:a 256k -map 0:0 -f segment -segment_time 10 -segment_list outputlist.m3u8 -segment_format mpegts 'output%03d.mp3'
But all the mp3 files are garbled when I play them.
There are two issues here. FFmpeg normally looks at the extension of the output files to determine the output container. However, when the output format is forced (-segment_format for the segment muxer, or just -f for most other muxers), ffmpeg respects that and no longer looks at the extension. In this case, segment_format is set to mpegts, so that's what the output files will be. To ensure valid mp3 files, set segment_format to mp3.
The second issue is that since the extension is mp3, my guess is that hls.js is not able to correctly determine the format of the segments, or it assumes a wrong format and tries to parse them that way. Either way, there should be some messages in the browser console to that effect. See https://github.com/video-dev/hls.js/pull/1190 for issues that hls.js has had with format probing.
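For the first issue, the command from the question with segment_format changed accordingly would look like this (a sketch of the suggested fix):
ffmpeg -i sometrack.wav -c:a libmp3lame -b:a 256k -map 0:0 -f segment -segment_time 10 -segment_list outputlist.m3u8 -segment_format mp3 'output%03d.mp3'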
I don't expect this to happen often, but while re-encoding video files via batch file to h265, I'm checking to make sure the audio is in aac. If it isn't, I want to convert to aac but keep the bitrate at whatever the old file uses, because if I just convert to aac, ffmpeg is going to use the default 128kbps value. For any old videos I have, the bitrate is probably going to be lower than that, so upconverting is going to increase the file size a little.
Is there any way to convert to aac but keep the old bit rate?
Here's what I was trying but it keeps converting the old mp3 89kbps stream to aac 128 kbps:
ffmpeg -i test.mp4 -acodec aac -vcodec copy test.aac.mp4
Note that the above is just for test purposes; I am actually converting the video.
Note 2: My question isn't at all similar to the other question it has been flagged as similar to. I have no trouble storing ffprobe results in variables, nor did I even mention that.
You could detect the bitrate of the audio stream in your input file using ffprobe and then, depending on its output, run the appropriate FFMPEG command.
Here's a small bash script that will detect the bitrate of the audio stream and, if it is less than 128 kbps, use that original bitrate during conversion. This should avoid upconverting:
#!/usr/bin/env bash
# Bitrate of the first audio stream, reported by ffprobe in bits per second
AUDIO_BITRATE=$(ffprobe -v error -select_streams a:0 -show_entries stream=bit_rate -of default=noprint_wrappers=1:nokey=1 "$1")
if [[ $AUDIO_BITRATE -lt 128000 ]]; then
    # Keep the original (lower) bitrate; the value is already in bits/s, so no "k" suffix
    ffmpeg -i "$1" -acodec aac -ab "$AUDIO_BITRATE" -vcodec copy "new-$1"
else
    ffmpeg -i "$1" -acodec aac -vcodec copy "new-$1"
fi
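To use it, save the script as, say, convert_audio.sh (the name is just for illustration), make it executable, and pass the input file as the first argument:
chmod +x convert_audio.sh
./convert_audio.sh test.mp4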
Alternatively if you need to convert into other video formats and don't have FFMPEG installed you could use a commercial conversion API such as Zamzar.
I am using ffmpeg to switch container from mkv to mp4 via this command:
ffmpeg -i filename.mkv -vcodec copy 1.mp4
This is the simplest command I found for switching from the mkv container to mp4 without re-encoding, yet the output stated otherwise (if I am not mistaken).
This is a small screenshot of the output:
It says Stream mapping: #0:0 (h264 (native)) -> h264 (libx264). Does this mean that it's re-encoding the h264 stream with libx264? What did I do wrong?
Any help is appreciated...
Problem solved: specifying the audio codec solved my problem...
ffmpeg -i filename.mkv -vcodec copy -acodec copy 1.mp4
When remuxing containers (e.g. MKV or AVI to MP4) with FFmpeg, by default only a single audio and a single video stream are kept in the output file; it tries to choose the best ones available. This can be avoided by providing -map 0.
Matroska files frequently contain subtitles in a format that FFmpeg does not support in MOV/MPEG containers, especially SRT/SubRip or ASS/SSA. They can either simply be dropped with -sn or be converted to a native format like mov_text, as in the commands below. (You could also burn hard subtitles into a video stream with filters.)
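If you prefer to drop the subtitles rather than convert them, a minimal sketch using a negative mapping (which removes the subtitle streams selected by -map 0) would be:
ffmpeg -i input.mkv -map 0 -map -0:s -codec copy output.mp4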
Sometimes, adding missing information by using heuristics might help. This is activated with -find_stream_info, but I am not sure whether this should be used by default.
I shall assume that the build configuration is not important to know (-hide_banner) and that only serious problems should be logged to the console (-loglevel warning; alternatives: quiet | panic | fatal | error | warning | info (default) | verbose | debug | trace).
Therefore, a rather universal conversion command looks like this:
$ ffmpeg -hide_banner -loglevel warning \
-find_stream_info -i input.mkv \
-map 0 -codec copy -codec:s mov_text output.mp4; \
rm input.mkv
For batch processing multiple files on a Windows box within cmd and overwriting existing files (-y), use for:
FOR /r %F IN (*.mkv) DO (@ffmpeg -hide_banner -loglevel warning -y -find_stream_info -i "%F" -map 0 -codec copy -codec:s mov_text "%~pnF.mp4")
ffmpeg.exe -i input_file_name.mkv output_file_name.mp4
It converts to mp4, but the output ends up larger, since without -codec copy the streams are re-encoded. :)
For those who use Windows and want to convert a directory of MKV files, throw this batch file in the same directory and execute it:
@ECHO OFF
FOR %%F IN (*.mkv) DO (
ffmpeg -i "%%~nF.mkv" -acodec copy -vcodec copy "%%~nF.mp4"
)
Some context in case anyone else is facing the same:
I'd previously recorded my desktop using OBS; the mkv file wasn't accepted by Sony Vegas. I ran the above batch which called ffmpeg on each of the captures and the resultant MP4s were accepted by Sony Vegas.
Does the ffmpeg metadata support, which is also described at:
http://wiki.multimedia.cx/index.php?title=FFmpeg_Metadata
also cover the MISB standard UAV metadata 601.5?
Is it the same as KLV?
Thanks,
Ran
FFMPEG does not natively support MISB KLV metadata, nor does it have demuxers or decoders for these kinds of KLV metadata at this time.
However, FFMPEG can be used to extract data elementary streams from containers like MPEG Transport Stream (TS) per ISO 13818-1. This works for UDP streams and local MPEG TS files; see the examples at the end of this response. The examples simply extract the data from the stream; they do not parse it. Parsing could easily be accomplished in real time by piping the output, or by post-processing, in many languages including C and Python (see the sketch after the examples).
It would be helpful to know specifically which containers you are trying to extract data from. In lieu of such information, I have assumed MPEG TS in my response and examples. I would also like to point out that the current standard for "UAS Local Dataset" is now ST 0601.8 at the time of this response.
I have personally tested the following examples with FFMPEG 2.5.4 on Mac OS X 10.9.5.
The following examples can be modified such that the output is sent to stdout by replacing the <outfile> with '-'.
Extract Data Stream From MPEG-TS File at Line Speed and Save to Binary File:
ffmpeg -i <MPEGTS_infile> -map d -codec copy -f data <binary_outfile>
Extract Data Stream From MPEG-TS File at Frame Rate and Save to Binary File:
ffmpeg -re -i <MPEGTS_infile> -map d -codec copy -f data <binary_outfile>
Extract Data Stream From MPEG-TS UDP Stream at Stream Rate and Save to Binary File:
ffmpeg -i udp://@<address:port> -map d -codec copy -f data <binary_outfile>
Extract Data Stream From MPEG-TS UDP Stream at Stream Rate and Direct to STDOUT:
ffmpeg -i udp://@<address:port> -map d -codec copy -f data -
Stream Video, Audio and Data Streams from MPEG-TS file Over UDP at Frame Rate:
ffmpeg -re -i <MPEGTS_infile> -map 0 -c copy -f mpegts udp://<address:port>
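As mentioned above, the extracted bytes can also be piped straight into your own parser; a sketch, where parse_klv.py is a hypothetical script of yours that reads raw KLV triplets from stdin:
ffmpeg -i <MPEGTS_infile> -map d -codec copy -f data - | python3 parse_klv.py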
I'm unsure if UAV metadata 601.5 is the same as KLV, but FFmpeg can demux KLV metadata since commit 69a042e from 28 Oct 2013:
mpegts: demux synchronous SMPTE 336M Key-Length-Value (KLV) metadata
This fixes ticket #2579: Data stream from UAV video reported as "Unknown" type and without codec_id set, so you may find other relevant information there too.
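If your build is recent enough, one quick way to check whether the KLV data stream is now recognized is a probe along these lines (a sketch; input.ts stands for your transport stream):
ffprobe -v error -select_streams d -show_entries stream=index,codec_name,codec_type input.ts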