Does anybody know how to check whether a broadcast is online or offline on an Icecast2 server?
Ruby preferred.
You can make a TCP (HTTP) connection to the server. An Icecast server behaves like a regular HTTP server, except that the response body is a continuous stream. So all you need to do is open a regular socket connection and send an HTTP request (you can grab a sample request from the Live HTTP Headers extension in Firefox). You should also set a timeout in case the server is down. If the server responds with HTTP 200 (OK), the stream is live.
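The question asks for Ruby, but here is a minimal sketch of that idea using plain Java sockets (the same steps translate directly to Ruby's TCPSocket inside a Timeout block). The host, port and mountpoint are placeholders:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

public class IcecastCheck {

    // Returns true if the mountpoint answers with an HTTP 200 status line.
    static boolean streamOnline(String host, int port, String mount) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 5000); // connect timeout
            socket.setSoTimeout(5000);                               // read timeout
            OutputStream out = socket.getOutputStream();
            out.write(("GET " + mount + " HTTP/1.0\r\nHost: " + host + "\r\n\r\n")
                    .getBytes("US-ASCII"));
            out.flush();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), "US-ASCII"));
            String statusLine = in.readLine();                       // e.g. "HTTP/1.0 200 OK"
            return statusLine != null && statusLine.contains(" 200 ");
        } catch (IOException e) {
            return false;                                            // down, refused or timed out
        }
    }

    public static void main(String[] args) {
        System.out.println(streamOnline("icecast.example.com", 8000, "/stream.mp3"));
    }
}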
I want to make the client side of my proxy server keep-alive, so that the proxy client doesn't go through a TCP close handshake every time.
Please look at this example in netty.
Adding the keepAlive option to this example doesn't seem to work properly, because it creates a new client connection every time the server gets a request and closes it when the response arrives.
So how can I make my proxy client keep-alive? Is there any reference or example for this?
Using the SO_KEEPALIVE socket option doesn't mean that the server (or the other peer in the connection) should ignore an explicit request to close the connection. It helps in cases like:
Idle sessions timing out or getting killed by the other end due to inactivity
Idle or long-running requests being disconnected by a firewall in between after a certain time passes (e.g. 1 hour, for resource clean-up purposes)
If the client's logic is not to re-use the same socket for different requests (i.e. if its application logic uses a new socket for each request), there's nothing you can do about that on your proxy.
The same argument is valid for the "back-end" side of your proxy as well. If the server you're proxying to doesn't allow the socket to be re-used, and explicitly closes a connection after a request is served, that wouldn't work as you wanted either.
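To make that distinction concrete with plain java.net sockets (the host and port below are placeholders): SO_KEEPALIVE only asks the OS to send TCP probe packets on an otherwise idle connection so it doesn't get dropped as dead; keeping an HTTP connection open for reuse is a separate, application-level decision on both ends.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class KeepAliveDemo {
    public static void main(String[] args) throws IOException {
        Socket socket = new Socket();
        socket.connect(new InetSocketAddress("backend.example.com", 8080), 5000);

        // TCP-level keep-alive: the OS sends probe packets while the connection
        // is idle, so intermediaries don't kill it for inactivity.
        socket.setKeepAlive(true);

        // HTTP-level keep-alive (connection reuse) is not controlled here: the
        // application has to hold on to this socket and send further requests
        // over it instead of closing it after each response.
        socket.close(); // closed here only because this demo sends nothing
    }
}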
If you are not closing the connection on your side then the proxy is. Different proxy servers will behave in different ways.
Try sending Connection: Keep-Alive as a header.
If that doesn't work, try also sending Proxy-Connection: Keep-Alive as a header.
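For example (hostnames are placeholders; Proxy-Connection is non-standard, but some proxies honor it), a request asking both the server and the proxy to keep the connection open looks like:

GET http://backend.example.com/resource HTTP/1.1
Host: backend.example.com
Connection: Keep-Alive
Proxy-Connection: Keep-Alive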
Preamble.
I have a specific application (called LinkBit PacketCraft) for network signaling testing. Scripts in this app use a specific procedure to open a socket for receiving requests (in my case SIP over UDP and HTTP requests), consisting of two blocks, "TCP/IP Control.Open Request" and "TCP/IP Control.Open Confirm", with parameters such as IP (v4/v6), port and protocol (TCP/UDP). I don't know exactly what they do, but after this procedure I can receive requests on the specified IP/port.
The Problem.
It worked well until our IT engineers reinstalled the OS (Windows Server 2008 R2). Don't ask me why, it just had to be done. After the reinstallation I have one server where it works as before and one server where it doesn't. My script shows that the socket opened successfully, and I can see the incoming requests (SIP over UDP and HTTP) in Wireshark on that machine, but the application doesn't receive them.
I have exactly the same script and the same version of the application on the other server, where it works.
Our IT service can't find any difference between the servers' configurations, but I don't believe them.
Does anyone know which setting or configuration might be responsible for delivering requests to the application?
P.S. One more remark: if I send a SIP or HTTP request from my script, the application can receive requests and responses on the same socket that was used for sending.
The problem was in the Firewall. I received all requests when I disabled it.
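Rather than leaving the Windows Firewall disabled, a narrower fix is to add inbound allow rules for the ports the script listens on (the rule names and ports below are placeholders; 5060/UDP is the usual SIP port, adjust the HTTP port to whatever you actually use):

netsh advfirewall firewall add rule name="PacketCraft SIP" dir=in action=allow protocol=UDP localport=5060
netsh advfirewall firewall add rule name="PacketCraft HTTP" dir=in action=allow protocol=TCP localport=8080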
I am trying to relay a stream that is being broadcast over HTTPS; is there a way to do that? The documentation describes how to broadcast over HTTPS using listen-socket, which I think is not what I want. Any help would be appreciated.
I tried relaying normal HTTP streams and it works, but not with HTTPS.
I tried it both with and without https:// in the URL:
<relay>
<server>https://streamingurl.com</server>
<port>800</port>
<mount>/f</mount>
<local-mount>/f</local-mount>
<on-demand>0</on-demand>
<relay-shoutcast-metadata>0</relay-shoutcast-metadata>
</relay>
Unfortunately this is currently not possible.
A good workaround for this problem is to set up a reverse proxy using nginx. I did this to access an https stream over http, and icecast2 is able to relay it without issues.
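Roughly, the nginx side of that workaround looks like the sketch below (addresses, ports and mounts are placeholders based on the question's config); the Icecast <relay> then points at 127.0.0.1 port 8000 over plain http instead of at the HTTPS origin:

server {
    listen 127.0.0.1:8000;

    location /f {
        proxy_pass https://streamingurl.com:800/f;
        proxy_http_version 1.1;
        proxy_buffering off;   # pass the continuous stream through without buffering
    }
}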
What is the origin server you are trying to relay? Another Icecast or something else?
The -kh fork of Icecast supports SSL, has a lot of extensions, and may be able to relay an https stream. (Sorry I'm not more help with that.) See https://karlheyes.github.io
You're not supposed to include http or https in the <server> element, just the address.
<relay>
<server>sourceip</server>
<port>443</port>
<mount>/sourcemount</mount>
<local-mount>/localmount</local-mount>
<on-demand>0</on-demand>
<relay-shoutcast-metadata>1</relay-shoutcast-metadata>
</relay>
I just tested that with a -kh branch Icecast server, and it worked, BUT I wasn't able to confirm it was actually making an SSL connection; it is making a connection, though. The -kh fork of Icecast will accept http or https over 80 or 443 (or any other port, for that matter).
I am serving content locally, accessible through http://0.0.0.0:4000. That works ok, I get a correct webpage, which contains the following line inside a script:
var socket = io('http://example.com');
i.e. I am referencing an external server. Now my browser shows the following error:
GET http://example.com:4000/socket.io/?EIO=3&transport=polling&t=1417447089410-1 net::ERR_CONNECTION_REFUSED
That is, the browser is trying to connect using the same port that it used to get the original page.
Everything works fine when both the SocketIO server and the web server listen on the same port.
Am I missing something? Is this a bug? Is there a workaround? Thank you.
You can read here about how a plain webSocket is initially set up. It all starts with a somewhat standard HTTP GET request, but one that has some special headers set:
GET /chat HTTP/1.1
Host: example.com:8000
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13
The exchange may also include an Origin header, which lets the host accept requests only from web pages served from certain origins. While this header can be spoofed by non-browser agents (so the server has to be prepared for that), it will likely be correct when the client is a real browser (assuming no proxy is modifying it).
If the server accepts the incoming request, it will then return an HTTP response that looks something like this:
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
At this point, the socket that used to be an HTTP socket is now a webSocket, and both endpoints have agreed to use the webSocket data format from now on. This initial connection may be followed by some form of authentication, or new or existing cookies can be used for authentication during the initial HTTP portion of the connection.
socket.io adds some enhancements on top of this by initially requesting a particular path of /socket.io and adding some parameters to the URL. This allows socket.io to negotiate whether it's going to use long polling or a webSocket so there are some exchanges between client/server with socket.io before the above webSocket is initialized.
So, back to your question. The socket.io server simply watches all incoming web requests on the normal web port (and looks both for its special path and for the special headers that indicate a webSocket initiation rather than a classic HTTP request). So it runs over the same port as the web server. This is done for a bunch of reasons, all of which provide convenience to the server and server infrastructure, since they don't have to configure their network to accept anything other than the usual port 80 they were already accepting (or whatever port they were already using for web requests).
In socket.io, the domain and port default to those of the web page you are on. So if you specify only one of them in your connect call, the other is taken from the page's URL; in your case io('http://example.com') keeps the page's port 4000, which is why the request goes to example.com:4000. If you want both a different domain and a different port, you must specify both.
I am working on a Dropbox-like system and I am wondering how the client gets notified when files change on the server side. It is my impression that both Dropbox and Ubuntu One operate over HTTP ports and work as follows:
1. if files change on the client machine, inotify detects it and performs a push from the client to the server. (I get this part)
2. if files change on the server, a simple unsolicited notification (just a message saying "time to sync") is sent from the server to the client. Then the client initiates a sync to the server.
I don't really care which language I do this in; I am just wondering how the client gets contacted. Specifically, what if a client is behind a firewall with its own local IP addresses? How does the server locate it?
Also, what kind of messaging protocols would be used to do something like this? I was planning on doing this over HTTP or SSH, but I have no attachment to that.
I'm not sure what Dropbox is using, but it could be websockets (unlikely, it's a pretty new and not widely deployed thing) or more likely a pending Ajax request from the client to the server -- to which the server only responds when it has new stuff for the client. The latter is the common way to implement (well, OK -- "hack";-) some form of "server push" with HTTP.
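That pending-request approach is usually called long polling. Here is a rough sketch of the client side in Java (the URL and the "time to sync" message are made up for illustration): the client keeps a request open, the server answers only when something has changed, and the client immediately re-issues the request.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

public class LongPollClient {
    public static void main(String[] args) throws Exception {
        while (true) {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://sync.example.com/changes").openConnection();
            conn.setReadTimeout(60_000); // the server may hold the request open up to a minute
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String notification = in.readLine(); // e.g. "time to sync"
                if (notification != null) {
                    // kick off a sync here, then loop and poll again
                }
            } catch (SocketTimeoutException e) {
                // nothing changed within the window; just poll again
            }
        }
    }
}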
It took a little research into networking to see how this would work, but it is far simpler than I expected. I am now using standard Java sockets for this: start up the server process, which listens for a socket connection, then start up the client, which connects to the server.
Once the connection is made, messages can be sent back and forth. This works through NAT (network address translation), the standard method for routing packets on private networks behind a firewall, because the client initiates the outbound connection and the server simply replies over that established connection.
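A minimal sketch of that arrangement with standard java.net sockets (the port, hostname and message text are placeholders): the client behind NAT dials out to the server's public address, and the server pushes its notification back over that same established connection.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class SyncNotification {

    // Server side: accept a client and push an unsolicited notification to it.
    static void runServer(int port) throws IOException {
        try (ServerSocket server = new ServerSocket(port);
             Socket client = server.accept();
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            out.println("time to sync"); // server -> client push
        }
    }

    // Client side: connect out through the NAT/firewall and wait for notifications.
    static void runClient(String host, int port) throws IOException {
        try (Socket socket = new Socket(host, port);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            if ("time to sync".equals(in.readLine())) {
                // start a sync against the server here
            }
        }
    }

    public static void main(String[] args) throws IOException {
        if (args.length > 0 && args[0].equals("server")) {
            runServer(9000);
        } else {
            runClient("sync.example.com", 9000);
        }
    }
}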