I've been playing around with nginx and wanted to know if there is a way to take an incoming RTMP stream and redirect it to another server based on the URL used. For example:
rtmp://ingress.foo.com/live/<stream_key> would forward to rtmp://<stream_key>.internal.foo.com
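Something along these lines is what I'm hoping for (a rough sketch using the third-party nginx-rtmp-module; the hostnames are placeholders and I'm not sure exec_push is actually the right directive for this):

    # nginx.conf sketch -- requires the third-party nginx-rtmp-module.
    # $name is the incoming stream key; hostnames are placeholders.
    rtmp {
        server {
            listen 1935;

            application live {
                live on;

                # Re-publish each incoming stream to a per-key internal host by
                # spawning an ffmpeg relay (the exec_* directives expand $name).
                exec_push ffmpeg -i rtmp://127.0.0.1/live/$name
                                 -c copy -f flv rtmp://$name.internal.foo.com/live/$name;
            }
        }
    }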
Thanks.
I'm trying to implement a plugin for a game, which will communicate via websockets with my server. I want to prevent double connections from the same IP address from different browsers/tabs. I can't use cookies, because the connection is opened from the plugin, which runs under a different domain, and cookies can be spoofed anyway; I also don't want to implement any authentication mechanism.
Now I've gone through a myriad of websocket server implementations, but I still can't understand whether I can communicate separately with multiple websockets opened from the same IP; or rather, I want to communicate only with the very first websocket opened from a specific IP and ignore the requests that come from the others. Is there any way to "store" a websocket connection on the server side during the handshake? As far as I can see, all I get is a request, and the only thing I can do is pass a parameter or token from the client side, which again can be spoofed, so it's really not very different from a regular HTTP request, only with a push option.
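To make it concrete, what I'm imagining on the server side is roughly the sketch below (written in Go with gorilla/websocket purely for illustration; my server isn't necessarily Go, and the endpoint name is made up):

    package main

    import (
        "net"
        "net/http"
        "sync"

        "github.com/gorilla/websocket"
    )

    var upgrader = websocket.Upgrader{
        // The plugin runs under a different domain, so origin checks are relaxed here.
        CheckOrigin: func(r *http.Request) bool { return true },
    }

    var (
        mu   sync.Mutex
        byIP = map[string]*websocket.Conn{} // first accepted connection per IP
    )

    func handleWS(w http.ResponseWriter, r *http.Request) {
        ip, _, _ := net.SplitHostPort(r.RemoteAddr)

        // Reserve the slot for this IP before upgrading, so a second
        // tab/browser arriving at the same time is turned away.
        mu.Lock()
        if _, taken := byIP[ip]; taken {
            mu.Unlock()
            http.Error(w, "already connected", http.StatusConflict)
            return
        }
        byIP[ip] = nil
        mu.Unlock()

        conn, err := upgrader.Upgrade(w, r, nil)
        if err != nil {
            mu.Lock()
            delete(byIP, ip)
            mu.Unlock()
            return
        }
        mu.Lock()
        byIP[ip] = conn
        mu.Unlock()

        defer func() {
            mu.Lock()
            delete(byIP, ip)
            mu.Unlock()
            conn.Close()
        }()

        for {
            if _, _, err := conn.ReadMessage(); err != nil {
                return // connection closed; the deferred cleanup frees the slot
            }
            // handle game plugin messages here
        }
    }

    func main() {
        http.HandleFunc("/ws", handleWS)
        http.ListenAndServe(":8080", nil)
    }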
Thanks in advance.
Is there a way for Squid proxy to intercept and filter the HTTPS payload of a request before it is sent, just like mitmproxy does?
I found out it's possible to log the payload with SSL bumping, and I also had some success altering response data with the sample eCAP adapter; if only I knew some C programming.
Also, iptables can filter payloads by string match, but only for plain HTTP traffic. I don't know whether it's possible to strip the SSL first, feed the decrypted traffic to iptables, and send the request out before relaying the response back (basically a MITM proxy).
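For reference, the kind of iptables rule I mean is something like this, which only helps while the traffic is unencrypted HTTP (the string and port are just examples):

    # Drop plain-HTTP packets containing a given string (useless once the payload is TLS).
    iptables -A FORWARD -p tcp --dport 80 -m string --algo bm --string "forbidden-word" -j DROP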
The only drawbacks with mitmproxy right now are that it can't be daemonized (you need to hack it), and I have my doubts about using it under heavy load.
Any ideas, one way or another?
I am trying to use a Kubernetes cluster to do on-demand live transcoding of video streams from multiple IP cameras and send them via websocket to a website.
I've modified a project I found online, written in Go, which takes a web request whose payload is the RTSP feed URL. It then uses that URL to start an FFmpeg process that accesses the stream, transcodes it to MPEG, and sends the MPEG data to another endpoint on the Go app, which starts a websocket connection. The response to the original request includes a websocket URL where the stream can be accessed; that URL can be put into an MPEG JS player and viewed in a browser over the websocket. The main advantage is that multiple clients can view the stream while there is only one stream coming from the camera, which reduces mobile data usage. I'll also add that the FFmpeg process automatically stops after 60 seconds if no request is sent to the endpoint.
The problem I can't find a solution to is how to scale the above application in a Kubernetes cluster, so that when a request comes in it does the following:
-> Check whether someone is already viewing the stream
----> If someone is viewing, a pod and websocket connection already exist, so a URL pointing to that pod just needs to be sent back to the client.
----> If no one is viewing, a streaming/websocket pod needs to be created, and once it is created a URL needs to be sent back to the client to access the stream.
I've looked at ingress controllers (possibly dynamically updating the ingress resource is a solution), or maybe even using a service mesh, but these are all new to me.
If anyone has any input to give me a bit of direction down a path, it would be much appreciated.
Many Thanks
Luke
Having your application dynamically configure K8s sounds like a complicated solution, with possible performance implications due to a very large number of ingress objects. You would also have to look into how to clean those up. And doing this requires a K8s client in your app.
I would suggest solving this without the help of K8s resources, by simply having your application return an HTTP redirect (301) to that websocket URL if need be.
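A minimal sketch of that idea (the handler, the in-memory registry, and the URL scheme are all made up for illustration; the "start a stream" step is whatever your setup actually does):

    package main

    import (
        "net/http"
        "sync"
    )

    // activeStreams maps a camera/stream ID to the websocket URL already serving it.
    // In a real deployment this would live somewhere shared (e.g. Redis), not in process memory.
    var (
        mu            sync.Mutex
        activeStreams = map[string]string{}
    )

    // startStream stands in for "create the streaming/websocket pod and return its URL".
    func startStream(id string) string {
        return "ws://streams.example.com/" + id // hypothetical URL scheme
    }

    func handleWatch(w http.ResponseWriter, r *http.Request) {
        id := r.URL.Query().Get("camera")

        mu.Lock()
        url, ok := activeStreams[id]
        if !ok {
            url = startStream(id)
            activeStreams[id] = url
        }
        mu.Unlock()

        // Redirect the client to the new or existing stream endpoint instead of
        // teaching the ingress about every individual stream.
        http.Redirect(w, r, url, http.StatusMovedPermanently)
    }

    func main() {
        http.HandleFunc("/watch", handleWatch)
        http.ListenAndServe(":8080", nil)
    }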
I want to implement a P2P photo-sharing application. The scenario is like this:
A is online and he would like to share his photos with B. Through some server, B gets A's IP address and accesses A's photos directly.
Is it possible to implement this using WebRTC or WebSockets? Please give me some input.
Thanks
I implemented P2P file transfer over websockets with a very small Node.js server, but it only works well in Chrome, thanks to the "download" attribute. I also have to have a middleware server, so it's not STRICTLY P2P. My Node.js server never has the full file; it just relays the current file chunk from one websocket connection to the other.
I hope WebRTC will help you implement what you want more smoothly, without any middleware.
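If it helps, the relay idea looks roughly like this (a sketch in Go with gorilla/websocket rather than my actual Node.js code; the endpoints and the single-receiver setup are simplified for illustration):

    package main

    import (
        "log"
        "net/http"

        "github.com/gorilla/websocket"
    )

    var upgrader = websocket.Upgrader{
        CheckOrigin: func(r *http.Request) bool { return true }, // demo only
    }

    var receiver *websocket.Conn // a single receiver, for illustration only

    func handleReceive(w http.ResponseWriter, r *http.Request) {
        c, err := upgrader.Upgrade(w, r, nil)
        if err != nil {
            return
        }
        receiver = c
    }

    func handleSend(w http.ResponseWriter, r *http.Request) {
        c, err := upgrader.Upgrade(w, r, nil)
        if err != nil {
            return
        }
        defer c.Close()
        for {
            msgType, chunk, err := c.ReadMessage()
            if err != nil {
                return
            }
            // Forward each chunk straight through; the server never holds the whole file.
            if receiver != nil {
                if err := receiver.WriteMessage(msgType, chunk); err != nil {
                    return
                }
            }
        }
    }

    func main() {
        http.HandleFunc("/send", handleSend)
        http.HandleFunc("/receive", handleReceive)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }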
I am planning to integrate Apache httpd with Tomcat using the proxy module to forward certain addresses to be processed by Tomcat. However, I wanted to ask whether it is possible to combine the output from Tomcat with content from Apache httpd so that they are returned to the client as part of one HTML page? (no frames or funny business)
When you configure Apache mod_proxy as a reverse proxy, you will get exactly what you are asking for.
From the definition of a reverse proxy:
The client makes ordinary requests for content in the name-space of the reverse proxy. The reverse proxy then decides where to send those requests, and returns the content as if it was itself the origin.
So the requests meant for dynamic content from Tomcat (based on the URL pattern) will be fetched via Apache and returned to the browser. For the end user, it's completely transparent. No frames or funny business :)
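For example, a minimal mod_proxy setup along these lines (the paths and ports are placeholders; adjust them to your Tomcat connector) forwards just the dynamic URLs to Tomcat while Apache serves everything else:

    # httpd.conf / virtual host sketch -- placeholder paths and ports.
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so

    ProxyRequests Off                 # reverse proxy only, never a forward proxy
    ProxyPass        /app http://localhost:8080/app
    ProxyPassReverse /app http://localhost:8080/app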
Related Reading:
http://www.apachetutor.org/admin/reverseproxies
http://httpd.apache.org/docs/2.0/mod/mod_proxy.html