How to delay live streaming (using FMS + FMLE)? - amazon-ec2

I've followed this tutorial to provide HTTP live streaming from a camera, using FMS 4.5 and FMLE 3.2, and I succeeded.
Now I need to add a delay (1~10 minutes) to the HTTP stream. The goal is for the client to see the video 10 minutes after it is first captured by the camera.
I'm using the Amazon cloud, specifically EC2 (with FMS 4.5 installed).
Is this possible?

Set the server up with a DVR buffer. You can add a dvrInfo tag to the Manifest.xml file and set its beginOffset to the delay you want.
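
For example, a minimal sketch of the event-level Manifest.xml, assuming the standard FMS 4.5 livepkgr setup (the exact file location and the endOffset value are assumptions to check against the Adobe documentation):

    <manifest xmlns="http://ns.adobe.com/f4m/1.0">
      <!-- beginOffset is in seconds; 600 corresponds to the requested 10-minute delay.
           endOffset="0" is only a placeholder here, not a tested value. -->
      <dvrInfo beginOffset="600" endOffset="0"></dvrInfo>
    </manifest>

The delay then comes from clients playing from inside the DVR window rather than at the live point.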

Related

How to (properly) use the ExtendRtspStream command in Google's Nest Device Access API?

Using Google's Nest Device Access API, I can generate an RTSP camera stream with the GenerateRtspStream command and subsequently stop it with the StopRtspStream command. That makes sense; however, these streams are only alive for 5 minutes by default, so the API also features another command: ExtendRtspStream.
On the face of it, this sounds like it should "extend" the stream you originally created. However, these RTSP stream URLs include an auth query parameter, and extending a stream simply issues a new token to use for it, which means the URL for the stream changes every time it gets extended. So in reality the stream isn't extended at all: the URL you use to access the stream still gets invalidated, and you have to restart playback with a new URL to keep watching. So what's the point? You may as well just call GenerateRtspStream again and switch over to the new stream once the first expires. Is there some way to seamlessly change the RTSP URL mid-stream that I'm not aware of, perhaps using FFMPEG? Or to have a proxy server that exposes a static RTSP URL and seamlessly switches the actual URL each time the stream gets extended?
Rant starts here: I'm really hoping that this behaviour is actually a bug or oversight in the design of the API, and that ExtendRtspStream is supposed to keep the same URL alive for as long as needed, because an RTSP stream that only stays alive for a maximum of 5 minutes is awfully pointless. Heck, it would be more useful to have an API that just returns the latest single-image snapshot from the camera every 10 seconds or so - but alas, there's no API for that either.
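
One way to approximate the static-URL proxy idea from the question (just a sketch, not an official feature of the API) is to run a local relay: ffmpeg pulls from whatever URL Google currently hands out and republishes it to an RTSP server you control (for example mediamtx / rtsp-simple-server), so players always connect to the same local URL. When ExtendRtspStream rotates the token, only the relay is restarted, not the players, although they will still see a short gap at each switch. The URLs below are placeholders:

    # republish the current (short-lived) Nest URL under a fixed local URL
    ffmpeg -rtsp_transport tcp \
           -i "rtsps://NEST_STREAM_HOST/path?auth=CURRENT_TOKEN" \
           -c copy -f rtsp rtsp://localhost:8554/nest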

Laravel queue on multiple servers

I am building a UGC project. I have a main server (the application), 4 encoder servers (for converting videos), and a storage server (for hosting videos).
I want to use the database driver for the Laravel queue, with the jobs table as the target. For each uploaded video I have 5 jobs that convert the video to 240p, 360p, 480p, 720p, and 1080p.
But a job does not specify which encoder server it belongs to. For example, a video is uploaded to Encoder Server #4, but Encoder Server #2 tries to start the job and fails because the files are on Encoder Server #4.
How can I solve this challenge?
As #apokryfos says, upload the file to shared storage such as Amazon S3, Google Cloud Storage, or DigitalOcean Spaces.
Have the processing job download it from the central store, process it, and upload the result back to central storage.
Binding a single job to a single worker (encoder server, as you call it) does not make sense: it will never scale, and you are pretty much doomed to run into issues.
With shared storage you can simply scale up the number of workers when you need to; you could even auto-scale them.
Consider using a Kubernetes deployment to allow easy (auto) scaling.
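
A minimal sketch of such a job in PHP/Laravel, assuming an s3 disk is configured and that transcode() is a hypothetical wrapper around the actual ffmpeg call (all names here are illustrative, not from the original post):

    <?php

    namespace App\Jobs;

    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Foundation\Bus\Dispatchable;
    use Illuminate\Queue\InteractsWithQueue;
    use Illuminate\Support\Facades\Storage;

    class ConvertVideo implements ShouldQueue
    {
        use Dispatchable, InteractsWithQueue, Queueable;

        public function __construct(
            public string $sourcePath,   // path on the shared (s3) disk
            public string $resolution    // e.g. '480p'
        ) {}

        public function handle(): void
        {
            // 1. Pull the source from central storage - any worker can run this job.
            $local = storage_path('app/tmp/' . basename($this->sourcePath));
            file_put_contents($local, Storage::disk('s3')->get($this->sourcePath));

            // 2. Transcode locally (transcode() is a hypothetical ffmpeg wrapper).
            $converted = transcode($local, $this->resolution);

            // 3. Push the result back to central storage.
            Storage::disk('s3')->putFileAs(
                'converted',
                new \Illuminate\Http\File($converted),
                $this->resolution . '-' . basename($this->sourcePath)
            );
        }
    }

Dispatch it once per resolution when the upload finishes, e.g. ConvertVideo::dispatch($path, '480p'), and any idle encoder server can pick it up.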

Video stream recording through JMeter

I would like to know whether it is possible to record a video stream using JMeter. I was recording a few scenarios and got stuck when it came to starting and playing the video.
Please let me know if I need to change any settings, or point me to any links that explain this.
Thanks in advance.
JMeter's HTTP(S) Test Script Recorder can only record the HTTP and HTTPS protocols, so it will be able to "record" the video only if it is delivered via HTTP GET requests; otherwise JMeter won't recognise the traffic.
Depending on the nature of your video, you might be able to use the JMeter HLS Plugin to replicate the network traffic footprint of a real user watching the video.
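
For context, an illustrative sketch of what live HLS traffic looks like on the wire (the paths are made up); since it is all plain HTTP GET requests, the HLS Plugin can model it as an ordinary request sequence:

    GET /live/master.m3u8        -> master playlist (lists the available renditions)
    GET /live/stream_720p.m3u8   -> media playlist, re-fetched every few seconds for live streams
    GET /live/segment_0001.ts    -> media segments, downloaded one after another
    GET /live/segment_0002.ts
    ...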

Using ffmpeg to live stream to YouTube

I created a Windows app to live stream the whole screen, or only a portion of it (a single window), to YouTube.
The app works, but I still have a problem I'm not able to understand.
I use different internet connections: 30 Mbit ADSL at home, or an ADSL router elsewhere at 2.5 Mbit.
In any case, after starting ffmpeg to live stream, the fps value grows from 300 to 2000 and the transmission is perfect for some minutes; then the fps slows down to a very low value, as does the bitrate of the YouTube stream. The image is no longer clear and eventually disappears, while the audio keeps working. CPU usage stays under 35-40%.
ffmpeg must be restarted to get another 5-7 minutes of good transmission.
I tried changing the ffmpeg command line, but nothing seems to influence this behaviour.
That is because I still don't understand where the problem is. Any suggestions?
A log of a single session (approx. 20 minutes) is available here: http://www.mbinet.it/public/ffmpeg-20180106-094446.txt
Another (approx. 5 min.) is available here: http://www.mbinet.it/public/ffmpeg-20180106-105529.txt
Thanks
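
For reference, since the command line is not quoted in the question: a typical Windows screen-capture command for a YouTube ingest looks roughly like the sketch below (the audio device name and the stream key are placeholders). The rate-control options (-b:v/-maxrate/-bufsize), a fixed keyframe interval (-g, here 2 seconds at 30 fps) and -pix_fmt yuv420p are the settings that most often affect the stability of a YouTube stream:

    ffmpeg -f gdigrab -framerate 30 -i desktop ^
           -f dshow -i audio="Microphone (Realtek Audio)" ^
           -c:v libx264 -preset veryfast -pix_fmt yuv420p ^
           -b:v 2500k -maxrate 2500k -bufsize 5000k -g 60 ^
           -c:a aac -b:a 128k -ar 44100 ^
           -f flv rtmp://a.rtmp.youtube.com/live2/STREAM_KEY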

WP7 BackgroundAudioAgent: get metadata from Icecast

I have a Windows Phone project that streams radio stations from an Icecast server.
I am using Background Audio Agent to play the streams.
The Icecast stream provides Track Title - Artist name as the metadata.
Is there any way I can fetch the metadata from the Audio Player?
Right now I am fetching the metadata from a PHP script every 10 seconds. It would be better if I could get it directly from Icecast.
In my iPhone application I am able to see the metadata; there I am using the video player.
Please tell me whether this can be done or not.
If not, please tell me whether I can read the stream byte by byte and pass it to the audio agent.
Thanks.
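
For what it's worth, the titles the iPhone player shows are sent in-band by Icecast using the ICY metadata convention, so "reading the stream byte by byte", as suggested above, would mean parsing that. A rough sketch of the exchange (header names follow the ICY convention; the values are examples):

    GET /radio.mp3 HTTP/1.1
    Icy-MetaData: 1                  <- client asks for interleaved metadata

    HTTP/1.0 200 OK
    icy-metaint: 16000               <- a metadata block follows every 16000 audio bytes

    [16000 audio bytes][1 length byte N][N*16 bytes: StreamTitle='Artist - Title';][16000 audio bytes]...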
