I am using the libavformat library to stream video to a network address like udp://127.0.0.1:1000. I use ffplay to display the received video stream from that address. However, the video appears a few seconds later (e.g. 5-6 seconds) in ffplay on the same machine. Do you know what the reason is?
More info:
I have written my own streaming application using libavformat. When I stream a 3-second 1080p video at 25 fps, ffplay does not show anything. If I stream the same video again, ffplay starts displaying the previously streamed video as well as the current one. So it looks like ffplay waits for a buffer to fill up to some level before displaying the stream. Am I correct?
To do what you are describing, you are (at least) encoding the video stream, sending it over a network socket and then decoding it again. If you are streaming from an already compressed source, there may even be an additional video decoding stage involved.
Depending on the video format, the compression and buffering settings, your network configuration and the hardware involved, a delay of several seconds is not out of the ordinary. People watching TV channels through their live streaming services often see such delays compared with the over-the-air signal, and TV stations are supposedly using professional equipment for the streaming process...
You might be able to get more specific help if you mention how you are using the libavformat library (especially if you have written your own streaming application), the codec settings and some basics about your video stream, such as its resolution and frame rate.
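One quick experiment (not a guaranteed fix) is to tell ffplay to skip most of its probing and buffering, so you can see how much of the delay is on the playback side rather than in your sender; for example:
ffplay -fflags nobuffer -flags low_delay -probesize 32 -analyzeduration 0 udp://127.0.0.1:1000
If the picture then appears almost immediately, the delay you are seeing comes mostly from ffplay's stream probing and buffering rather than from your libavformat application.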
Related
How does live streaming, using any protocol/codec, work from end-to-end?
I have been searching Google, YouTube, the FFmpeg documentation, the OBS source code and Stack Overflow, but I still cannot understand how live video streaming works end-to-end, starting from individual video frames. So I am trying to capture desktop screenshots and convert them into a live, H.264-encoded video stream.
What I know how to do:
Capture screenshot images using Graphics.CopyFromScreen with C# on some loop
Encode the bits and save images as JPEG files
Send the JPEG images as base64, one at a time, by writing them to a named pipe server
Read the image buffer from the named pipe on a Node.js server
Send each base64 JPEG frame over a socket to the client to display on a web page
What I want to be able to do:
Encode the images, in chunks I assume, into some H.264 format for live streaming with one of the protocols (RTMP, RTSP, HLS, DASH) - see the rough ffmpeg sketch after this list
Push the encoded video chunks onto a server (such as an RTMP server) continuously (every 1-2 seconds, I assume?)
Access the server from a client to stream and display the live video
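From what I can tell so far, a single ffmpeg command might cover the capture, H.264 encoding and RTMP push in one go; this is only my rough guess, and the server URL and stream key below are placeholders:
ffmpeg -f gdigrab -framerate 30 -i desktop -c:v libx264 -preset veryfast -tune zerolatency -pix_fmt yuv420p -g 60 -f flv rtmp://example.com/live/STREAM_KEY
gdigrab is ffmpeg's Windows screen-capture input, libx264 does the H.264 encoding, and the FLV muxer is what RTMP carries - but I don't know whether this is meant to replace my screenshot/JPEG pipeline entirely.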
I've tried using FFmpeg to continuously send .mp4 files to an RTMP server, but this doesn't seem to work because it closes the connection after each video. I have also looked into ffmpeg concat lists, but that just combines videos; to my understanding it can't append videos to a running live stream and probably wasn't made for that.
So my best lead is from this Stack Overflow answer, which suggests:
Encode into an FLV container and set the duration to be arbitrarily long (according to the answer, YouTube used this method)
Encode the stream into RTMP, using ffmpeg or other open-source RTMP muxers
Convert the stream into HLS
How is this encoding and converting done? Can this all be done with ffmpeg commands?
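For the HLS conversion step in particular, my current (untested) guess is that ffmpeg can pull the RTMP stream and segment it, along the lines of the following, where the URL and output path are placeholders:
ffmpeg -i rtmp://example.com/live/STREAM_KEY -c copy -f hls -hls_time 4 -hls_list_size 6 -hls_flags delete_segments stream.m3u8
Here -c copy just remuxes the already-encoded stream into 4-second HLS segments without re-encoding, but I'm not sure if that is the whole picture.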
I created a Windows app to live stream the content of the whole screen, or only a portion of it (a single window), to YouTube.
I used this app but I still have a problem I'm not able to understand.
I use different internet connections: 30 Mbit ADSL at home, or a 2.5 Mbit ADSL router when outside.
In either case, after starting ffmpeg to live stream, the fps starts growing from 300 to 2000 and the transmission is perfect for some minutes; then the fps slows down to a very low value, as does the bitrate of the YouTube stream. The image is no longer clear and then disappears, while the audio keeps working. The CPU is still under 35-40% usage.
ffmpeg must be restarted to get another 5-7 minutes of good transmission.
I tried changing the ffmpeg command line, but nothing seems to influence this behaviour.
This is because I still don't understand where the problem is. Any suggestions?
A log of a single session (approx. 20 minutes) is available here: http://www.mbinet.it/public/ffmpeg-20180106-094446.txt
Another (approx. 5 minutes) is available here: http://www.mbinet.it/public/ffmpeg-20180106-105529.txt
Thanks
I am trying to use GStreamer for low-latency streaming under Windows, and what I am facing now is audio capture latency.
I tested audio capture latency by creating a loopback pipeline as follows:
gst-launch-1.0 directsoundsrc ! directsoundsink
However, the audio latency is higher than expected, more than ~200 ms. I guess directsoundsrc does some buffering of the audio samples; I tried to tune its parameters (buffer-time/latency-time) in the hope of getting better latency, but the results were still not acceptable.
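For illustration, the kind of variant I tried looked roughly like this (the buffer-time and latency-time values are in microseconds and are only example numbers):
gst-launch-1.0 directsoundsrc buffer-time=20000 latency-time=10000 ! audioconvert ! directsoundsink buffer-time=20000
(buffer-time and latency-time come from GStreamer's base audio source/sink classes, so both directsoundsrc and directsoundsink accept them.)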
Does anyone know a better way to capture low-latency audio and feed it into a GStreamer pipeline than the one I am using?
Or does it have some other meaning? I have searched all over the internet, and the documentation is very thin on it... If someone could point me to something that explains exactly what it is, I would appreciate it.
I am talking about this:
ffmpeg "rtmp://...... live=1" .....
tia.
The short answer is yes.
RTMP supports both live streaming and VOD. 'live=1' means the RTMP session is a live stream: the media server is receiving the video feed from the source in real time, so rewinding to an earlier point is not a supported action. Without 'live=1', RTMP runs in VOD mode, which means the entire video already exists on the media server, and the server can then rewind or seek to an arbitrary position in the video.
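For example, with an ffmpeg or ffplay build that uses librtmp, the option is appended to the URL just as in your command (the server path here is only a placeholder):
ffplay "rtmp://example.com/app/stream live=1"
With ffmpeg's native RTMP implementation, the equivalent is, if I remember correctly, the -rtmp_live live option.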
Technically, though, on the client side (preferably in software, not a web page), if you maintain a buffer yourself you can rewind or pause one way or another. Since you are saving the data as you receive it from the media server and everything is under your control, you will be able to rewind and pause live streams, but you will have to implement the buffering and decoding mechanism yourself. The ffmpeg command will not help with this.
I am new to live555.
I want to stream my webcam from a Windows 7 (64-bit) machine behind a home LAN, using ffmpeg as the encoder, to a live555 server running on a 64-bit Debian Linux machine in a data center, over the WAN. I want to send an H.264 RTP/UDP stream from ffmpeg, and the "testOnDemandRTSPServer" should serve RTSP streams to the clients that connect to it.
I am using the following ffmpeg command, which sends UDP data to port 1234 at IP address AA.BB.CC.DD:
.\ffmpeg.exe -f dshow -i video="Webcam C170":audio="Microphone (3- Webcam C170)" -an
-vcodec libx264 -f mpegts udp://AA.BB.CC.DD:1234
On the Linux server I am running testOnDemandRTSPServer on port 5555, which expects raw UDP data from AA.BB.CC.DD:1234. I try to open the RTSP stream in VLC using rtsp://AA.BB.CC.DD:5555/mpeg2TransportStreamFromUDPSourceTest
But I get nothing in VLC. What am I doing wrong? How can I fix it?
From what I remember, it was non-trivial to write a DeviceSource class. The problem you're describing is definitely something that's discussed quite frequently on the live555 mailing list - you should get yourself approved for the list as soon as possible if you want to do anything related to RTSP development.
The problem you seem to be having is related to the fact that some video formats are written with streaming in mind: the RTSP server can easily stream certain formats because they contain "sync bytes" and other markers which it can use to determine where frame boundaries lie. The simplest solution would be to get your hands on the SDK for the camera and use that to request data from it. There are many different libraries and toolkits that let you access camera data - one of which is the DirectX SDK. Once you have the camera data, you need to encode it into a streamable format: you might be able to get the raw camera frames using DirectX and then convert them to MP4 / H.264 frame data using ffmpeg (libavcodec, libavformat).
Once you have your encoded frame data, you feed it into your DeviceSource class, and it will take care of streaming the data for you. I wish I had code on hand, but I was bound by an NDA not to remove code from the premises; the general algorithm is documented on the live555 website, though, so I am able to explain it here.
I hope you have a bit more luck with this. If you get stuck, then remember to add code to your question. Right now the only thing that's stopping your original plan from working (stream file to VLC) is the file format you chose to stream.
One thing you can try is to increase VLC's logging verbosity level to 2: VLC expects in-band parameter sets, and if they are missing it will print a debug message in the messages window saying that it is waiting for parameter sets. Just having the parameter sets in the SDP of the RTSP DESCRIBE is not sufficient. IIRC you can configure x264 to output parameter sets periodically, or at least with every IDR frame.
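For example, starting VLC from a console with a higher verbosity should surface that debug message (same URL as above):
vlc --verbose=2 rtsp://AA.BB.CC.DD:5555/mpeg2TransportStreamFromUDPSourceTest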
Other things you can try:
You can test the stream with openRTSP before using VLC. If you run openRTSP -d 5 -Q rtsp://xxx.xxx.xxx.xxx:5555/mpeg2TransportStreamFromUDPSourceTest, openRTSP will print quality statistics after streaming for 5 seconds. Then you will be able to verify that testOnDemandRTSPServer is indeed relaying the stream and that there is no problem between the ffmpeg application and testOnDemandRTSPServer.
Have you tried a different stream? Also, I had a similar problem due to issues with my firewall; you might want to make sure you can actually stream data through those ports.
If you are missing a sync byte, it's probably a stream issue - try using a different data source and see if that helps; try an .avi or an .mp4 file, since .mp4 files are usually easy to stream. If the streaming works with the .mp4 file and not with your MPEG-TS file, then it's a problem in your file - ffmpeg is trying to figure out where each "frame" or "frame set" of data ends so that it can stream discrete chunks.
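For a quick sanity check with a known-good source, you could remux an existing .mp4 (assuming it contains H.264 video; the file name is just a placeholder) into a transport stream and send it to the same port:
ffmpeg -re -i test.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts udp://AA.BB.CC.DD:1234
-re makes ffmpeg read the file at its native frame rate, and the h264_mp4toannexb bitstream filter converts the H.264 stream to the Annex B format that MPEG-TS expects.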
It's been over 2 years since I last worked with this stuff, so let me know if you get anywhere.