Decoding with a modified ffprobe, one can get:
ffprobe -show_packets -show_frames -of json <rtspstream of ip camera>
AVPacket:
"side_data_list": [
    {
        "side_data_type": "Producer Reference Time",
        "wallclock": 213414,
        "flags": 24
    }
]
I dug around inside GStreamer for about 10 hours but couldn't decode this data, let alone make it available to the GStreamer API downstream. I would love any tips.
I've compiled GStreamer from source on Ubuntu, ran it with debug logging at 1000, then cross-referenced the function calls visible there to find my way around, and tried various sections: depay, demux, decode...
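For what it's worth, here is a minimal sketch of how that JSON output could be consumed from Python, assuming the modified ffprobe is on PATH and emits the "Producer Reference Time" entry shown above (the stream URL is a placeholder):
# Sketch: pull Producer Reference Time side data out of ffprobe's JSON output.
# Assumes a (modified) ffprobe that emits the "Producer Reference Time" entry above.
import json
import subprocess

url = "rtsp://<rtsp stream of ip camera>"  # placeholder
out = subprocess.run(
    ["ffprobe", "-v", "quiet", "-show_packets", "-of", "json", url],
    capture_output=True, text=True, check=True,
).stdout

for pkt in json.loads(out).get("packets", []):
    for sd in pkt.get("side_data_list", []):
        if sd.get("side_data_type") == "Producer Reference Time":
            print(pkt.get("pts_time"), sd.get("wallclock"), sd.get("flags"))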
I have five webcams whose content I want to live-stream to an m3u8 (HLS) stream, so I can use an HTML web player to play that file.
My current setup:
I have five systems, each with a webcam connected to it, so I am using RTSP to stream data from each system to AWS.
./ffmpeg -f avfoundation -s 640x480 -r 30 -i "0" -f rtsp rtsp://awsurl.com:10000/cam1
./ffmpeg -f avfoundation -s 640x480 -r 30 -i "0" -f rtsp rtsp://awsurl.com:10000/cam2
....
./ffmpeg -f avfoundation -s 640x480 -r 30 -i "0" -f rtsp rtsp://awsurl.com:10000/cam5
In the cloud, I want to set up a server. I googled and learned about GStreamer, with which I can set up an RTSP server. The command below has an error (I can't figure out how to set up one server for multiple webcam streams):
gst-launch-1.0 udpsrc port=10000 ! rtph264depay ! h264parse ! video/x-h264,stream-format=avc ! \
mpegtsmux ! hlssink target-duration=2 location="output_%05d.ts"\
playlist-root=http://localhost:8080/hls/stream/ playlists-max=3
My question is how I can set up RTSP to differentiate between multiple webcam streams using one server (or do I have to create a server for each webcam stream)?
This might not be a canonical answer, as there are no details about the camera streams, the OS, or your programming language, but you may try the following:
1. Install prerequisites
You would need the gst-rtsp-server library and its GObject introspection bindings for Python (and maybe the GStreamer dev packages as well if you want to try from C++).
Assuming a Linux Ubuntu host, you would use:
sudo apt-get install libgstrtspserver-1.0-0 gir1.2-gst-rtsp-server-1.0 libgstreamer1.0-dev
2. Get information about the received streams
You may use various tools for that; with GStreamer you may use:
gst-discoverer-1.0 rtsp://awsurl.com:10000/cam1
For example, if you see:
Topology:
unknown: application/x-rtp
video: H.264 (Constrained Baseline Profile)
Then it is H264-encoded video sent over RTP (RTP/H264).
You would get more details by adding the verbose flag (-v).
If you want your RTSP server to stream with H264 encoding and the incoming stream is also H264, then you can just forward it.
If the received stream has a different encoding than the one you want to serve, then you would have to decode the video and re-encode it.
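If you prefer to check this programmatically instead of reading gst-discoverer-1.0 output, a small sketch along these lines (using GstPbutils from the GStreamer Python bindings; the URL is the one from the question) may help:
# Sketch: inspect the incoming stream's caps to decide whether you can simply
# forward (already H264) or need to decode and re-encode.
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstPbutils', '1.0')
from gi.repository import Gst, GstPbutils

Gst.init(None)
discoverer = GstPbutils.Discoverer.new(5 * Gst.SECOND)
info = discoverer.discover_uri("rtsp://awsurl.com:10000/cam1")
for vstream in info.get_video_streams():
    caps = vstream.get_caps().to_string()
    print(caps)  # e.g. "video/x-h264, ..." if the camera already sends H264
    print("forward" if caps.startswith("video/x-h264") else "re-encode")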
3. Run the server:
This Python script would run an RTSP server, streaming 2 cams with H264 encoding (expanding to 5 should be straightforward).
Assuming here that the first cam is H264 encoded, its stream is just forwarded. For the second camera, the stream is decoded and re-encoded into H264 video.
In the latter case, it is difficult to give a canonical answer, because the decoder and encoder plugins would depend on your platform. Some also use a special memory space (NVMM for NVIDIA, d3d11 for Windows, ...); in such a case you may have to copy to system memory for encoding with x264enc, or better, use another encoder that takes the same memory space as input (a variant along those lines is sketched after the script below).
import gi
gi.require_version('Gst','1.0')
gi.require_version('GstVideo','1.0')
gi.require_version('GstRtspServer','1.0')
from gi.repository import GObject, GLib, Gst, GstVideo, GstRtspServer
Gst.init(None)
mainloop = GLib.MainLoop()
server = GstRtspServer.RTSPServer()
mounts = server.get_mount_points()
# cam1 is already H264: depay and re-pay only (plain forwarding, no transcoding)
factory1 = GstRtspServer.RTSPMediaFactory()
factory1.set_launch('( rtspsrc location=rtsp://awsurl.com:10000/cam1 latency=500 ! rtph264depay ! h264parse ! rtph264pay name=pay0 pt=96 )')
mounts.add_factory("/cam1", factory1)
# cam2 has another encoding: decode, re-encode to H264 with x264enc, then pay
factory2 = GstRtspServer.RTSPMediaFactory()
factory2.set_launch('( uridecodebin uri=rtsp://awsurl.com:10000/cam2 source::latency=500 ! queue ! x264enc key-int-max=15 insert-vui=1 ! h264parse ! rtph264pay name=pay0 pt=96 )')
mounts.add_factory("/cam2", factory2)
# serve both mount points on the default RTSP port (8554)
server.attach(None)
print("stream ready at rtsp://127.0.0.1:8554/{cam1,cam2,...}")
mainloop.run()
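As an illustration of the memory-space remark above, on an NVIDIA Jetson (an example platform only, untested here) the second factory might use the hardware encoder instead of x264enc; element names and properties depend on your platform and installed plugins:
# Hypothetical variant of factory2 for an NVIDIA Jetson: the decoder output stays
# in NVMM memory, nvvidconv converts it, nvv4l2h264enc encodes in hardware.
factory2.set_launch('( uridecodebin uri=rtsp://awsurl.com:10000/cam2 source::latency=500 '
                    '! queue ! nvvidconv ! nvv4l2h264enc insert-sps-pps=1 '
                    '! h264parse ! rtph264pay name=pay0 pt=96 )')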
If you want to use C++ instead of Python, you would check out the test-launch sample for your GStreamer version (you can get the version with gst-launch-1.0 --version), which is similar to this script, and adapt it.
4. Test
Note that it may take a few seconds to start before displaying.
gst-play-1.0 rtsp://[Your AWS IP]:8554/cam1
gst-play-1.0 rtsp://[Your AWS IP]:8554/cam2
I have no experience with AWS; be sure that no firewall blocks port 8554 (RTSP over TCP) or the UDP ports negotiated for the RTP streams.
rtsp-simple-server might be a good choice for presenting and broadcasting live streams through various format/protocols such as HLS over HTTP.
Even on a meager and old configuration, it has still given me decent latency.
If you are looking for reduced latency, you might be surprised by cam2ip. Unfortunately this isn't HLS, it's actually MJPEG, and thus without sound, but with far better latency.
I have code which decodes a live H264 camera stream and displays it. I use the ffmpeg dxva2 decoder.
Problem:
avcodec_send_packet returns a negative error code.
What I have tried:
I dumped the stream packets and saved them to an h264 file, then verified with
ffmpeg.exe -hwaccel dxva2 -threads 1 -i output.h264 -f null - -benchmark
and it throws this error:
Failed setup for format dxva2_vld: hwaccel initialisation returned error.
I found that the h264 file has baseline profile. Is baseline profile
not supported by the dxva2 decoder?
I am able to play the file with the VLC player.
Also, I have decoded high-profile h264 video using the above command and it works fine.
Please help me fix this. Thanks in advance.
It depends on your GPU hardware capabilities. For example, here are the NVIDIA capabilities (from the June 2016 codec SDK):
Also, for NVIDIA, if you check this link (Nvidia PureVideo), some widths can't be decoded:
Note that all Feature Set B hardware cannot decode H.264 for the following widths: 769-784, 849-864, 929-944, 1009-1024, 1793-1808, 1873-1888, 1953-1968, 2033-2048 pixels.
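If you want to check whether your stream hits one of these limitations before blaming the decoder, a small sketch like this (using ffprobe's JSON output on the dumped file from the question) reports the profile and width:
# Sketch: report profile and width of the dumped stream with ffprobe, then
# flag widths that Feature Set B hardware reportedly cannot decode.
import json
import subprocess

BAD_WIDTHS = [(769, 784), (849, 864), (929, 944), (1009, 1024),
              (1793, 1808), (1873, 1888), (1953, 1968), (2033, 2048)]

out = subprocess.run(
    ["ffprobe", "-v", "quiet", "-print_format", "json",
     "-show_streams", "output.h264"],
    capture_output=True, text=True, check=True,
).stdout

stream = json.loads(out)["streams"][0]
profile, width = stream.get("profile"), int(stream["width"])
print("profile:", profile, "width:", width)
if any(lo <= width <= hi for lo, hi in BAD_WIDTHS):
    print("width falls in a range Feature Set B hardware cannot decode")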
I am playing a media file over RTSP by fetching the stream directly from a server. I am getting a "DTS discontinuity in stream" error. I have tried with both ffmpeg and ffplay.
FFMPEG
I am using the following ffmpeg command:
ffmpeg -i rtsp://media:123456#10.10.167.20/41415308b3839f2 -f wav test.wav
As an output of this command, I am getting the following error:
FFPLAY
I am using the following ffplay command:
ffplay rtsp://media:123456#10.10.167.20/41415308b3839f2
As an output of this command, I am getting the following error:
Can anyone please tell me when this error usually occurs? Is there any reason behind it, and is there any workaround?
From libavformat/utils.c, in the avformat_find_stream_info function:
/* Check for a discontinuity in dts. If the difference in dts
* is more than 1000 times the average packet duration in the
* sequence, we treat it as a discontinuity. */
Also note that RTP does not define any mechanism for recovering from packet loss.
So, if you lose packets in such a way that the dts difference between two read packets is more than 1000 times the average packet duration, you get the foregoing warning.
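In other words, the check is roughly equivalent to the following sketch (a simplified restatement of the heuristic quoted above, not the actual libavformat code):
# Simplified illustration of the discontinuity heuristic quoted above:
# a jump in dts larger than 1000x the average packet duration is flagged.
def is_dts_discontinuity(prev_dts, dts, avg_packet_duration):
    return abs(dts - prev_dts) > 1000 * avg_packet_duration

# Example: with ~20 ms packets, a gap of more than ~20 s triggers the warning.
print(is_dts_discontinuity(prev_dts=0, dts=25_000, avg_packet_duration=20))  # True
print(is_dts_discontinuity(prev_dts=0, dts=40, avg_packet_duration=20))      # False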
I am trying to set up an RTSP stream that can be accessed from an application. I have been experimenting with ffmpeg to achieve that. I have succeeded as far as streaming from ffmpeg to ffplay, but I could not load the stream in VLC, for example. Here are the calls that I made from two different shells on the same machine:
ffmpeg.exe -y -loop 1 -r 24 -i test_1.jpg -vcodec libx264 -tune stillimage -f rtsp rtsp://127.0.0.1:1234/stream.sdp
ffplay.exe -rtsp_flags listen rtsp://127.0.0.1:1234/stream.sdp
Can anybody explain to me what I would have to do to load the stream as a network stream using VLC? Any help is appreciated.
I have done this before, and I'm not sure what was wrong with the RTSP output of ffmpeg. But what I can say right now is: please consider using the Live555 library if you have any streaming scenario, because the ffmpeg code for the RTP muxer is not good and is buggy. ffmpeg has another solution for a streaming server, called ffserver, which prepares an ffmpeg pipe for VLC or another third-party application, and that is badly written and buggy too. The libav group (the fork of the libav* libraries) never used the ffserver code, and I'm not sure they have any plan to adopt ffserver as their solution; they have ffplay (avplay), ffmpeg (avconv) and ffprobe, but no ffserver.
If you want to use Live555, which is really easy, you just have to go to their website (www.live555.com), download the source code and build the MediaServer application (it is in the 'MediaServer' folder). If you read the code's documentation, I'm sure you won't have any problem. It's a basic RTSP server that streams any (supported) file accessible on your HDD via an RTSP URL of your server.
If you have any problem with the code, just comment here, so I can help you more with Live555.
Recently I had a task to use ffmpeg as a transcoding as well as a streaming tool. The task was to convert a file from a given format to MP4 and immediately stream it by capturing it from stdout. So far so good. The streaming works well with the native player of Android tablets as well as with the VLC player. The issue is with the Flash player. It gives the following error:
NetStream.Play.FileStructureInvalid : Adobe Flash cannot import files that have invalid file structures.
The ffmpeg flags used are:
$ ffmpeg -loglevel quiet -i somefile.avi -vbsf h264_mp4toannexb -vcodec libx264 \
-acodec aac -f MP4 -movflags frag_keyframe+empty_moov -re - 2>&1
As noted in the docs for -movflags
The mov/mp4/ismv muxer supports fragmentation. Normally, a MOV/MP4 file has all the metadata about all packets stored in one location (written at the end of the file, it can be moved to the start for better playback using the qt-faststart tool). A fragmented file consists of a number of fragments, where packets and metadata about these packets are stored together. Writing a fragmented file has the advantage that the file is decodable even if the writing is interrupted (while a normal MOV/MP4 is undecodable if it is not properly finished), and it requires less memory when writing very long files (since writing normal MOV/MP4 files stores info about every single packet in memory until the file is closed). The downside is that it is less compatible with other applications.
Either switch to a flash player that can handle fragmented MP4 files, or use a different container format that supports streaming better.
Also, -re is an input-only option, so it would make more sense to specify it before the input, instead of before the output.
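As a minimal sketch of that last point (fixing only the -re placement; it does not solve the Flash incompatibility), the invocation could look like this when driven from Python and captured from stdout, keeping the file name and codecs from the question:
# Sketch: run the transcode with -re given as an *input* option (before -i),
# write a fragmented MP4 to stdout and read it from the pipe.
import subprocess

cmd = [
    "ffmpeg", "-loglevel", "quiet",
    "-re",                                    # input option: read at native rate
    "-i", "somefile.avi",
    "-vcodec", "libx264", "-acodec", "aac",
    "-movflags", "frag_keyframe+empty_moov",  # fragmented MP4, usable while writing
    "-f", "mp4", "-",                         # container to stdout
]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
while chunk := proc.stdout.read(64 * 1024):
    pass  # forward chunk to the HTTP response / socket feeding the player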