I want to capture video buffering instances (such as the duration of each buffering event) for a particular video. How can I achieve this using the LoadRunner or JMeter tools?
For Apache JMeter, you can use this plugin which computes buffering:
http://www.ubik-ingenierie.com/blog/load-testing-mpeg-dash-jmeter-ubik-videostreaming-plugin/
http://www.ubik-ingenierie.com/blog/ubikloadpack-http-live-streaming-plugin-jmeter-videostreaming-mpegdash/
It handles the HLS, MPEG-DASH, and Microsoft Smooth Streaming video formats.
Disclaimer: we provide this solution
You capture this in a single user session; these buffering events are local processing on the client.
I have read some articles saying that I should convert the MP4 first, then wait for the request and serve the .ts and .m3u8 files.
But I am looking for a way to start converting the video only when the request comes in, and to send the .m3u8 immediately, even while the conversion is not finished.
If a request arrives but the .ts file is not ready yet, the server should wait until the file is ready and then send it immediately.
Is it possible to do something like this, or is there another way to achieve the same effect?
When you start with a single bit rate MP4 and want to serve it as an HLS or MPEG-DASH (usually just called DASH) stream, you typically do a number of steps, sketched with an example command after the list:
transcode the video into however many bit rate versions you want
split the video into a segmented or fragmented format to allow HLS or MPEG-DASH streaming
'Package' into the particular streaming protocol you want for the device you are streaming to, which is usually HLS or DASH these days.
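As a minimal sketch of those steps for a single rendition with the ffmpeg CLI (the file names, bit rate, and segment length are placeholder choices, not recommendations):

    ffmpeg -i input.mp4 \
        -c:v libx264 -b:v 2000k -c:a aac \
        -hls_time 6 -hls_list_size 0 \
        -hls_segment_filename 'seg_%03d.ts' \
        playlist.m3u8

This transcodes, splits into .ts segments, and writes an HLS playlist in one pass; a real multi-bit-rate setup would repeat this per rendition and add a master playlist on top.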
Assuming the video is not a live stream, it is common for the transcoding and splitting to be done initially when the video is first ingested into the system.
The packaging is then applied 'Just In Time' when the user or client requests the video. Note that the transcoding, splitting, and even packaging can be combined into a single step, with some cloud encoding services offering exactly that service; however, 'Just In Time' packaging is still very common.
The main reason for not also doing 'Just In Time' transcoding is that transcoding is processor intensive. Being able to schedule it when you have spare computing resources, or can allow it plenty of time to complete, is often the most cost-effective approach.
It is definitely possible to do 'Just In Time' transcoding - this is what live streams have to do anyway. However, what you save in storage costs may be eaten (several times over, sometimes) by processing costs, so it is a business decision as much as a technical one.
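A very rough sketch of the 'wait until the segment is ready, then serve it' idea from the question, assuming a convention (invented here for illustration) where the transcoder writes each segment to a temporary name and renames it once complete:

    # request-handler side: block until the requested segment exists, then emit it
    SEG="seg_0003.ts"               # hypothetical segment name derived from the request
    while [ ! -f "$SEG" ]; do
        sleep 0.2                   # poll until the transcoder has renamed the finished file
    done
    cat "$SEG"                      # hand the finished segment back to the web server

The rename convention matters: testing for the mere existence of a file that is still being written would serve a truncated segment.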
I am very new to the video world, but have noticed that social media services, in particular Snapchat and Instagram, do a great job of getting videos to load fast even on poorer connections. I know some of this is down to how the videos are transcoded.
I have gathered some presets I think I should be using when transcoding with ffmpeg, but am not sure about the formats or the other parts of the command. I would love to hear what people think!
const ffmpeg = require('fluent-ffmpeg'); // Node wrapper around the ffmpeg CLI

ffmpeg()
  .input(remoteReadStream)                // remoteReadStream: any readable stream
  .outputOptions('-preset fast')          // trade compression efficiency for encode speed
  .outputOptions('-movflags +faststart')  // put the moov atom up front so playback can start early
  .save('output.mp4');                    // choose an output file and run
Other than that, I am not entirely sure what else to use.
If you want fast start of the video, you must ensure that the first frame is a key frame. Use the -force_key_frames '00:00:00.000' parameter of ffmpeg for that task.
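For example (the file names are placeholders; note that forcing key frames requires re-encoding the video, so stream copy of the video track will not work here):

    ffmpeg -i input.mp4 -force_key_frames '00:00:00.000' -c:v libx264 -c:a copy output.mp4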
But in fact the main method for fast video response on poor connections is adaptive bitrate streaming (https://en.m.wikipedia.org/wiki/Adaptive_bitrate_streaming). It selects a video source with a bitrate appropriate for the user's bandwidth. So you need to encode your video in different sizes, with different qualities and bitrates, and assemble them into a special playlist for adaptive streaming.
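A minimal sketch of such an encoding ladder with the ffmpeg CLI, assuming just two renditions (the sizes, bit rates, and paths are illustrative only):

    # the low/ and high/ directories must already exist
    ffmpeg -i input.mp4 -c:v libx264 -b:v 800k  -s 640x360  -c:a aac -hls_time 6 -hls_list_size 0 low/index.m3u8
    ffmpeg -i input.mp4 -c:v libx264 -b:v 2500k -s 1280x720 -c:a aac -hls_time 6 -hls_list_size 0 high/index.m3u8

A master playlist then lists both index.m3u8 files with their bandwidths so the player can switch between them as the connection changes.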
I was reading about the -re option in ffmpeg. What they have mentioned is the following.
From the docs:
-re (input)
Read input at the native frame rate. Mainly used to simulate a grab device, or live input stream (e.g. when reading from a file). Should not be used with actual grab devices or live input streams (where it can cause packet loss). By default ffmpeg attempts to read the input(s) as fast as possible. This option will slow down the reading of the input(s) to the native frame rate of the input(s). It is useful for real-time output (e.g. live streaming).
My doubt is basically about the part of the above description that I highlighted: it is suggested not to use the option with live input streams, yet at the end it is suggested to use it for real-time output.
Considering a situation where both the input and the output are RTMP streams, should I use it or not?
Don't use it. It's useful for real-time output when ffmpeg is able to process a source at a speed faster than real-time. In that scenario, ffmpeg may send output at that faster rate and the receiver may not be able to or want to buffer and queue its input.
It (-re) is suitable for streaming from offline files, reading them at their native frame rate (e.g. 25 fps); otherwise, FFmpeg may output hundreds of frames per second, and this may cause problems.
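For example, when pushing a pre-recorded file to an RTMP server (the URL and file name are placeholders), -re paces the send rate to the file's native frame rate:

    ffmpeg -re -i input.mp4 -c copy -f flv rtmp://example.com/live/stream_key

With a live input, such as an incoming RTMP feed, you would drop -re, since the source is already arriving in real time.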
I'd like to capture multiple real-time video streams arriving over the RTP protocol, using ffmpeg. When I initiate the recording by issuing the ffmpeg <command line parameters> command, it always takes a while for the connection to be established and the actual recording to begin. This can be more than 2 seconds in certain cases, which causes a constant time offset at replay.
How can I extract the information containing the time of the first actually recorded frame from ffmpeg? If it's not possible with ffmpeg without editing the source (which I did, and would like to avoid for other reasons), is there any similar multi-platform open-source tool which could be used?
Not possible without effort on your side. Use something like live555 to capture your streams. All your sources must synchronize to a single clock using NTP, and then the RTP timestamps can be used at the receiver end to synchronize the various streams. This is not trivial and is used in video conferencing systems. I am not aware of any free implementation of the same.
If you do not have control over the sources, then you are out of luck, because there is no such thing as a common time base across the streams. If you do, you still need to modify live555 and your player to synchronize using the timestamps on the streams and the NTP clock. Like I said, not trivial.
Perhaps GStreamer might already have plugins for it; it's been a while since I used it, so I am not sure. You could take a look there (gstreamer.net).
I have a link to a video stream (a web cam that is always recording some place). I would like to be able to take a screenshot of whatever is on that video stream at the moment a user goes to my app.
Can it be done and how?
I have looked, but all I could find was about taking screenshots from a movie/video file, not from a streaming video.
I suspect ffmpeg connected to the streaming service as an input could probably extract thumbnails for you. You could either leave it running and pick up latest thumbnails, or fire it up with a system command and make it connect and emit a single screenshot. The latter would be more efficient and easier to code if you have a low number of hits, but would have a high latency on each request.
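The single-screenshot variant might look something like this (the stream URL and output name are invented for illustration):

    ffmpeg -i 'rtsp://example.com/webcam' -frames:v 1 -q:v 2 snapshot.jpg

ffmpeg connects, decodes until it has one frame, writes snapshot.jpg, and exits, so your app can then serve that file.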
I did a quick search for you, but the most common uses of ffmpeg with streaming input are to re-format and re-stream, or in a personal video recorder setup. ffmpeg is quite complex, so I could not complete the search in the time I have had so far.