I am developing an application that listens for RTP data using GStreamer and converts the received raw data packets using FFMPEG.
It works well for the codecs PCMA, PCMU, G722 and G729, and converts the received audio into wav and mp3. But the conversion fails when a G729b packet is received. FFMPEG has support for G729 as mentioned here, but nothing is mentioned about G729a/b. GStreamer also doesn't mention anything about G729a/b, though it has a decoder for G729. I also didn't find any other library that can convert G729b to wav or mp3.
Can anyone please suggest a way or a library to convert G729a/b to wav?
Thanks in advance!
Command used:
ffmpeg -f g729 -i .g729 -acodec pcm_s16le -ar 8000 output.wav
Please find the two files here
Related
I am using ffmpeg to create an HLS stream. The source is an mkv with multiple audio tracks. I have tried using -map to specify the audio stream as well. I also found that when I point ffmpeg to any other audio stream in the file it works; it's just the first audio stream that does not. At one point I replaced -c copy with -acodec aac -ac 6 on the first stream and I got sound, which is great, but I am only looking to copy the stream and not re-encode it.
The next thing I tried was using other mkv videos I have. All show the same issue. The mkv files by themselves play both audio and video fine in VLC, but when playing the output.m3u8 in VLC the option to choose different audio tracks is greyed out.
Here is the command I'm using:
ffmpeg -i "./video.mkv" -ss 00:00:00 -t 00:00:30 -c copy -f hls "output.m3u8"
I want the audio of my hls stream to reflect that of the mkv source:
However, what the command above returns gives me no sound and shows this in MediaInfo:
I've also noticed that HLS does not support PCM. Is it possible DASH could work with this stream, since it is PCM?
HLS segments can be either MPEG-TS or fragmented MP4. Neither officially supports PCM audio, so you'll have to convert it.
DASH uses fragmented MP4 as segment format.
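For example, a minimal sketch that copies the video and re-encodes only the first audio track to AAC (the stream indexes and audio bitrate here are assumptions; adjust to your file):
ffmpeg -i "./video.mkv" -ss 00:00:00 -t 00:00:30 -map 0:v:0 -map 0:a:0 -c:v copy -c:a aac -b:a 192k -f hls "output.m3u8"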
I'm building an FFMPEG stream on iOS that should convert any file type to wav and send it to an HTTP stream:
ffmpeg -i "/path/to/audio/track.suffix" -vn -strict -2 -acodec pcm_u8 -f wav -listen 1 -seekable 1 http://localhost:8090/restream.wav
I posted a question to the ffmpeg user list and someone said:
You are not sending valid wav files like this.
Can anyone help me to see what's wrong with this ffmpeg cmd?
Thanks!
I can't see why this command should send invalid wav files. I think you can safely use this. But why don't you just try it out?
My goal is transcode this file with ffmpeg.
https://drive.google.com/open?id=1ATuPtSbZeQLexB1HBP509hInDOTyfEV8
ffplay fails to analyze or play this file and returns:
Invalid pixel format.
This is the simple command:
ffplay -i testproxy.mxf
ffprobe -i testproxy.mxf -show_streams
It has been encoded by Avid Interplay with this target quality:
H.264 800Kbps Proxy 1080i 25
Maybe it's a raw file and needs some specification ahead of the input file?
Any suggestion is appreciated
Either Interplay doesn't write* a standard MXF or there's a limitation in ffmpeg's mxf demuxer.
But you can play the file with
ffplay -vcodec h264 testproxy.mxf
and similarly, you can transcode using
ffmpeg -vcodec h264 -i testproxy.mxf ...
*more likely, as mediainfo also fails to detect the video codec.
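For a full transcode, the same forced-decoder trick applies; a sketch, where the output container, codecs and quality settings are assumptions:
ffmpeg -vcodec h264 -i testproxy.mxf -c:v libx264 -crf 23 -c:a aac output.mp4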
I'd like to capture audio streaming from a live radio on internet using ffmpeg.
If you have some examples or documentation, it would be much appreciated.
Assuming the protocol is HTTP and audio format is MP3 it can be as simple as:
ffmpeg -i http://server:port -c copy output.mp3
See:
FFmpeg Protocols Documentation
ffmpeg Documentation: Stream copy
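If the station streams something other than MP3, a re-encode is a reasonable fallback; a sketch, with the URL and bitrate assumed:
ffmpeg -i http://server:port -vn -c:a libmp3lame -b:a 128k output.mp3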
Does ffmpeg metadata, which is also described in:
http://wiki.multimedia.cx/index.php?title=FFmpeg_Metadata
also support the MISB standard UAV metadata 601.5?
Is it the same as KLV?
Thanks,
Ran
FFMPEG does not natively support MISB KLV metadata or have demuxers or decoders for KLV metadata of these types at this time.
However, FFMPEG can be used to extract data elementary streams from containers like MPEG Transport Stream (TS) per ISO 13818-1. This works for UDP streams and local MPEG TS files. See the examples at the end of this response. The examples simply extract the data from the stream; they do not parse it. Parsing could easily be accomplished in real time by piping the output, or in post-processing, using many languages including C and Python (a piping sketch follows the examples below).
It would be helpful to know specifically which containers you are trying to extract data from. In the absence of that information, I have assumed MPEG TS in my response and examples. I would also like to point out that the current standard for "UAS Local Dataset" is ST0601.8 at the time of this response.
I have personally tested the following examples with FFMPEG 2.5.4 on Mac OS X 10.9.5.
The following examples can be modified such that the output is sent to stdout by replacing the <outfile> with '-'.
Extract Data Stream From MPEG-TS File at Line Speed and Save to Binary File:
ffmpeg -i <MPEGTS_infile> -map d -codec copy -f data <binary_outfile>
Extract Data Stream From MPEG-TS File at Frame Rate and Save to Binary File:
ffmpeg -re -i <MPEGTS_infile> -map d -codec copy -f data <binary_outfile>
Extract Data Stream From MPEG-TS UDP Stream at Stream Rate and Save to Binary File:
ffmpeg -i udp://<address:port> -map d -codec copy -f data <binary_outfile>
Extract Data Stream From MPEG-TS UDP Stream at Stream Rate and Direct to STDOUT:
ffmpeg -i udp://<address:port> -map d -codec copy -f data -
Stream Video, Audio and Data Streams from MPEG-TS file Over UDP at Frame Rate:
ffmpeg -re -i <MPEGTS_infile> -map 0 -c copy -f mpegts udp://<address:port>
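As a sketch of the piping approach mentioned above, the stdout form can be fed straight into a parser; the parser script name here is hypothetical:
ffmpeg -i <MPEGTS_infile> -map d -codec copy -f data - | python parse_klv.py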
I'm unsure if UAV metadata 601.5 is the same as KLV, but FFmpeg can demux KLV metadata since commit 69a042e from 28 Oct 2013:
mpegts: demux synchronous SMPTE 336M Key-Length-Value (KLV) metadata
This fixes ticket #2579: Data stream from UAV video reported as "Unknown" type and without codec_id set, so you may find other relevant information there too.