Icecast seems to play multiple instances of the stream - caching

I set up an Icecast server with an ices0 source client.
Everything works fine, but sometimes it seems as if multiple instances of the stream are playing. For example, when a track ends and I refresh the stream, I hear a few seconds from the end of the last track again, not the beginning of the new one.
Can you help me understand what is happening?
Thank you in advance!

When you connect to a stream served by Icecast, Icecast usually sends a "burst" of data to reduce the time a client has to buffer before it can start playing. Depending on the burst size, you can end up getting a bit of the "old" data. This is completely normal. You can disable the burst, but that's generally not advisable, as it will cause longer buffering times. Depending on which clients you serve, though, this might not be a concern in your use case.
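For reference, the burst behaviour is controlled in icecast.xml. A minimal sketch of the relevant settings (the values shown are illustrative, not recommendations):

    <!-- icecast.xml (excerpt) -->
    <icecast>
      <limits>
        <!-- Send buffered data to new clients so playback starts quickly -->
        <burst-on-connect>1</burst-on-connect>
        <!-- How much already-played data (in bytes) a new client receives;
             smaller values shorten the replayed tail of the previous track
             at the cost of a longer client-side buffering delay -->
        <burst-size>65535</burst-size>
      </limits>
    </icecast>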
Another note: Icecast is not intended for low-latency streaming, so if that is what you are looking for, it is not something Icecast can do.

Related

How to (properly) use the ExtendRtspStream command in Google's Nest Device Access API?

Using Google's Nest Device Access API, I can generate an RTSP camera stream using the GenerateRtspStream command and subsequently stop the stream using the StopRtspStream command. That makes sense; however, these streams are only alive for 5 minutes by default, so the API also features another command: ExtendRtspStream.
On the face of it, this sounds like it should "extend" the stream you originally created. However, these RTSP stream URLs include an auth query parameter, and extending a stream simply issues a new token to use for it, which means that the URL for the stream changes every time it gets extended. So in reality the stream isn't getting extended at all, since the URL you use to access it still gets invalidated, and you have to restart with a new URL to continue watching. So what's the point? You may as well just call the GenerateRtspStream command again and switch over to the new stream once the first one expires. Is there some way to seamlessly change the RTSP URL mid-stream that I'm not aware of, perhaps using FFMPEG? Or to run a proxy server that broadcasts a static RTSP URL and seamlessly switches the actual URL behind it each time the stream gets extended?
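For concreteness, this is roughly what the extend call looks like as I understand the API (a sketch against the executeCommand endpoint; the project/device IDs, access token, and surrounding plumbing are placeholders):

    // Sketch: extending a Nest camera RTSP stream via the SDM
    // executeCommand endpoint. All identifiers here are placeholders.
    async function extendRtspStream(
      projectId: string,
      deviceId: string,
      accessToken: string,
      streamExtensionToken: string, // from the original GenerateRtspStream
    ) {
      const url =
        "https://smartdevicemanagement.googleapis.com/v1/enterprises/" +
        `${projectId}/devices/${deviceId}:executeCommand`;
      const res = await fetch(url, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${accessToken}`,
        },
        body: JSON.stringify({
          command: "sdm.devices.commands.CameraLiveStream.ExtendRtspStream",
          params: { streamExtensionToken },
        }),
      });
      // The catch described above: the response carries a *new*
      // streamExtensionToken and streamToken, so the ?auth= part of the
      // RTSP URL changes on every extension.
      return (await res.json()).results;
    }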
Rant starts here: I'm really hoping that this behaviour is actually a bug or an oversight in the design of the API, and that ExtendRtspStream is supposed to keep the same URL alive for as long as needed, because an RTSP stream that only stays alive for a maximum of 5 minutes is awfully pointless. Heck, it'd be more useful to have an API that just returns the latest single-image snapshot from the camera every 10 seconds or so - but alas, there's no API for that either.

Does the "live=1" on ffmpeg rtmp urls mean that the stream is live and you cannot rewind or pause it?

Or does it have some other meaning? I have searched all over the internet, and the documentation is very thin on it... If someone could point me to something that explains exactly what it is, I would appreciate it.
I am talking about this:
ffmpeg "rtmp://...... live=1" .....
Thanks in advance.
The short answer is yes.
RTMP supports both live streaming and VOD. 'live=1' means the RTMP session is a live stream: the media server is receiving the video feed from the source in real time, so rewinding to an earlier point is not a supported action. Without 'live=1', RTMP runs in VOD mode, meaning the entire video already exists on the media server, and the server is then capable of rewinding or seeking to an arbitrary position in the video.
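For illustration (server and stream names are placeholders): when ffmpeg is built with librtmp, the option rides inside the quoted URL as in the question, while ffmpeg's native RTMP client takes the same thing as the rtmp_live protocol option instead.

    # librtmp-style: the option is appended inside the quoted URL
    ffmpeg -i "rtmp://example.com/app/stream live=1" -c copy live.flv

    # ffmpeg's native RTMP client expresses it as a protocol option
    ffmpeg -rtmp_live live -i "rtmp://example.com/app/stream" -c copy live.flv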
Technically, though, on the client side (preferably in software, not a web page), if you maintain a buffer yourself, you can rewind or pause one way or another. Since you are saving the data as you receive it from the media server and everything is under your control, you are able to rewind and pause live streams. But you will have to implement the buffering and decoding mechanism yourself; the ffmpeg command will not help with this.

AJAX Polling Question - Blocking Or Frequent?

I have a web application that relies on very "live" data - so it needs an update every 1 second if something has changed.
I was wondering what the pros and cons of the following solutions are.
Solution 1 - Poll A Lot
So every 1 second, I send a request to the server and get back some data. Once I have the data, I wait for 1 second before doing it all again. I would detect client-side if the state had changed and take action appropriately.
Solution 2 - Block A Lot
So I start a request to the server that will time-out after 30 seconds. The server keeps an eye on the data on the server by checking it once per second. If the server notices the data has changed it sends the data back to the client, which takes action appropriately.
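To make the comparison concrete, here is a browser-side sketch of both approaches (the endpoint name, wait parameter, and response handling are invented):

    // Solution 1: short polling - one request per second,
    // change detection done client-side.
    async function shortPoll(onChange: (data: unknown) => void) {
      while (true) {
        const res = await fetch("/data"); // invented endpoint
        onChange(await res.json());
        await new Promise((r) => setTimeout(r, 1000)); // wait 1 s, ask again
      }
    }

    // Solution 2: long polling - the server parks the request for up to
    // 30 s and answers early only when the data actually changes.
    async function longPoll(onChange: (data: unknown) => void) {
      while (true) {
        const res = await fetch("/data?wait=30"); // invented wait parameter
        if (res.status === 200) onChange(await res.json());
        // a 204 means "timed out, nothing changed" - just reconnect
      }
    }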
Scenario
Essentially, the data is reasonably small in size, but changes at random intervals based on live events. The thing is, the web UI will be running something in the region of 2,000 instances, so do I have 2,000 requests per second coming from the UI or do I have 2,000 long-running requests that take up to 30 seconds?
Help and advice would be much appreciated, especially if you have worked with AJAX requests under similar volumes.
One common solution for such cases is to use static JSON files. Server-side scripts update them whenever the data changes, and they are served by a fast, lightweight web server (like nginx). Since the files are static and small, the web server will serve them straight from its cache, very quickly.
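A sketch of the producer half of that scheme (paths are invented, and it assumes a Node-based updater rather than the original poster's stack); the write-then-rename keeps pollers from ever reading a half-written file:

    import { writeFileSync, renameSync } from "node:fs";

    // Hypothetical producer side: called whenever the live data changes.
    // nginx serves /var/www/status.json as a plain static file.
    function publish(data: object) {
      const tmp = "/var/www/status.json.tmp";
      writeFileSync(tmp, JSON.stringify(data));
      renameSync(tmp, "/var/www/status.json"); // rename is atomic on POSIX
    }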
Consider a better architecture. Implementing this kind of messaging system is trivial to do right in something like nodeJS. Message dispatch will be instantaneous, and you won't need to poll for your data on either side.
You don't need to rewrite your whole system: The data producer could simply POST the updates to the nodeJS server instead of writing them to a file, and as a bonus, you don't even need to waste time on disk IO.
If you started without knowing any nodeJS, you could still be done in a couple hours, because you can just hack up the chat example.
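A minimal sketch of that setup with plain Node (all routes and names are invented): the data producer POSTs to /update, and every client parked on /data gets the message pushed immediately:

    import { createServer, ServerResponse } from "node:http";

    let waiting: ServerResponse[] = []; // clients parked on a long poll

    createServer((req, res) => {
      if (req.method === "POST" && req.url === "/update") {
        // The producer POSTs updates here instead of writing them to a file.
        let body = "";
        req.on("data", (chunk) => (body += chunk));
        req.on("end", () => {
          for (const client of waiting) {
            client.writeHead(200, { "Content-Type": "application/json" });
            client.end(body); // instant dispatch to every waiting client
          }
          waiting = [];
          res.end("ok");
        });
      } else if (req.url === "/data") {
        waiting.push(res); // park the request until an update arrives
        setTimeout(() => {
          const i = waiting.indexOf(res);
          if (i !== -1) {
            waiting.splice(i, 1);
            res.writeHead(204).end(); // 30 s timeout: client re-polls
          }
        }, 30_000);
      } else {
        res.writeHead(404).end();
      }
    }).listen(8080);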
I can't comment yet, but I would agree with geocar. Running live or almost-live web services with just polling leaves you with a solution stuck between a rock and a hard place.
You could also look into WebSockets, which allow the server to push data; that sounds like a better fit for this than polling at intervals of one to thirty seconds.
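For example, the browser side of a push subscription is only a few lines (the URL is a placeholder):

    // The server sends a frame only when something actually changes.
    const ws = new WebSocket("wss://example.com/live"); // placeholder URL
    ws.onmessage = (event) => {
      const update = JSON.parse(event.data);
      // apply `update` to the UI here
    };
    ws.onclose = () => {
      // a real client would reconnect with backoff here
    };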
Good luck!

Bittorrent protocol 'not available'/'end connection' response?

I like being able to use a torrent app to grab the latest TV show so that I can watch it at my leisure. The problem is that the structure of the protocol tends to cause a lot of incoming noise on my connection for some time after I close the client. Since I also like to play online games, this means I have to make sure my torrent client is shut off about an hour (depending on how long the tracker advertises me to the swarm) before I want to play a game. Otherwise I get a horrible connection to the game because of the persistent flood of incoming torrent requests.
I threw together a small Ruby app to watch the incoming requests so I'd know when the UTP traffic let up:
http://pastebin.com/TbP4TQrK
The thought occurred to me, though, that there may be some response I could send to notify the clients that I'm no longer participating in the swarm and that they should stop sending requests. I glanced over the protocol specifications, but I didn't find anything of the sort. Does anyone more familiar with the protocol know whether there is such a response?
Thanks in advance for any advice.
If a bunch of peers on the internet have your IP and think that you're in their swarm, they will try to contact you a few times before giving up. There's nothing you can do about that. Telling them to stop, one at a time, will probably end up using more bandwidth than just ignoring the UDP packets would.
Now, there are a few things you can do to mitigate it though:
Make sure your client sends stopped requests to all of its trackers (the announce is sketched below). This is part of the protocol specification, and most clients do it. If this succeeds, the tracker won't tell anyone about you past that point. But peers remember having seen you, so it doesn't mean nobody will try to connect to you.
Turn off DHT. The DHT acts much like a tracker, except that it doesn't have a stopped message. It will take something like 15-30 minutes for your IP to time out once it has been announced to the DHT.
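For reference, the stopped request from the first suggestion is just an ordinary HTTP tracker announce with event=stopped; a simplified sketch (UDP/compact trackers and error handling omitted):

    // Sketch of an HTTP tracker "stopped" announce.
    // infoHash and peerId are the raw 20-byte values, percent-encoded per byte.
    function stoppedAnnounce(
      tracker: string, // e.g. "http://tracker.example.com/announce"
      infoHash: Uint8Array,
      peerId: Uint8Array,
      port: number,
    ) {
      const enc = (bytes: Uint8Array) =>
        [...bytes].map((b) => "%" + b.toString(16).padStart(2, "0")).join("");
      const url =
        `${tracker}?info_hash=${enc(infoHash)}&peer_id=${enc(peerId)}` +
        `&port=${port}&uploaded=0&downloaded=0&left=0&event=stopped`;
      return fetch(url); // the tracker removes you from its peer list
    }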
I think it might also be relevant to ask yourself whether these stray incoming 23-byte UDP packets really matter. Presumably you're not being flooded by more than a few per second (probably fewer). Have you made any actual measurements, or is it mostly paranoia that keeps you waiting for them to let up?
I'm assuming you're playing some latency-sensitive FPS, in which case the server will most likely blast you with at least 10-50 full-MTU packets per second, without any congestion control. I would be surprised if you attracted so many bittorrent connection attempts that they caused any of the game packets to be dropped.

Is there an alternative to AJAX that does not require polling, without server-side modifications?

I'm trying to create a small and basic "ajax"-based multiplayer game. Coordinates of objects are provided by a PHP "handler". This handler.php file is polled every 200 ms using AJAX.
Since there is no need to poll when nothing happens, I wonder: is there something that could do the same thing without frequent polling? E.g. Comet, though I have heard that you need to configure server-side applications for Comet. It's a shared web server, so I can't do that.
Maybe I could prevent the handler.php file from even returning a response if nothing has to be changed on the client - is that possible? Then again, the client would still be uselessly asking for a response even though nothing has changed yet. Basically, it should only use bandwidth and server resources if something needs to be told to the client, e.g. the change of an object's coordinates.
Comet is generally used for this kind of thing, but it can be a fragile setup: it's not a particularly common technology, so it's easy not to "get it right." That said, there are more resources available now than when I last tried it, about two years ago.
I don't think you can do what you're thinking and have handler.php simply not return anything and stop execution: the web server will keep the connection open and prevent any further polling until handler.php does something (terminates or produces output). And when it does, you're still handling a response.
You can try a long polling technique, where your AJAX allows a very large timeout (e.g. 30 seconds), and handler.php spins without responding until it has something to report, then returns. (You'll want to make sure the spinning is not resource-intensive). If handler.php "expires" and nothing happens, have it exit and let AJAX poll again. Since it only happens every 30 seconds, it will be a huge improvement over ~5 times a second. That would keep your polling to a minimum.
But that's the sort of thing Comet is designed for.
As Ajax only offers you a client-server request model (normally termed pull, rather than push), the only way to get data from the server is via a request. However, a common technique to get around this is for the server to respond only when it has new data. So the client makes a request, the server hangs on to that request until something happens, and then replies. This gets around the need for frequent polling even when the data hasn't changed, as the client only needs to send a new request after it gets a response.
Since you are using PHP, one simple method might be to have the PHP code call the sleep command for 200 ms at a time between checks for data changes, and then return the data to the client when it does change.
EDIT: I would also recommend having a timeout on the request. So if nothing happens for, say, 2 seconds, a "no change" message is sent back. That way the client knows the server is still alive and processing its request.
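Sketched in Node terms for brevity (the PHP version would use usleep() between checks the same way; dataChangedSince is an invented stand-in for your change check):

    // Invented stand-in for "has the game state changed since `since`?"
    declare function dataChangedSince(since: number): Promise<object | null>;

    const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

    // Sleep-and-check loop with the 2 s "no change" timeout from the edit.
    async function handleLongPoll(since: number): Promise<string> {
      const deadline = Date.now() + 2000;
      while (Date.now() < deadline) {
        const changed = await dataChangedSince(since);
        if (changed !== null) return JSON.stringify(changed);
        await sleep(200); // check for changes five times a second
      }
      return JSON.stringify({ status: "no change" }); // server is still alive
    }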
Since this is tagged “html5”: HTML5 has <eventsource> and WebSocket, but in practice implementation support is still largely in the future tense.
Opera implemented an old version of <eventsource> called <event-source>.
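The form of that idea that did stick is the EventSource JavaScript API (server-sent events); the client side looks like this (the endpoint is a placeholder):

    // One long-lived HTTP response; the server pushes messages as they occur.
    const source = new EventSource("/events"); // placeholder endpoint
    source.onmessage = (event) => {
      console.log("update:", event.data);
    };
    // The server replies with "Content-Type: text/event-stream" and writes
    // lines of the form:  data: {"x":1,"y":2}  followed by a blank line.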
Here's a solution - use a SaaS comet provider, such as WebSync On-Demand. No server resources to worry about, shared hosting or not, since it's all offloaded, and you can push out the information as needed.
Since it's SaaS, it'll work with any server language. For PHP, there's already a publisher written and ready to go.
The server must take part in this. Check with your hosting provider which modules are available, or try to convince them to support Comet.
Maybe you should consider a small Virtual Private Server (VPS) for this.
One thing to add on the long polling suggestions: If you're on a shared server, this solution will have limited scalability, as each active long poll will keep a connection (and a server-side process to service that connection) active. Your provider most likely has limits (either policy-defined or de facto) on the number of connections you can have open at a time, so you'll hit a wall if you have more sessions/windows than that playing concurrently.
