How to deploy a Flask-SocketIO application on an IIS server?

My use case:
I am building an API that takes images as input, performs some image-processing operations, and returns the output JSON to the client.
Multiple clients can send requests to the server concurrently, and the server takes 2 to 3 minutes to process each request.
Initially I thought of a normal Flask application, where the client would poll the server for a response periodically. But as Flask-SocketIO can respond to the client event-based, I want to use Flask-SocketIO.
As the other APIs in my project are hosted on IIS, I wanted to use the same IIS as the hosting server.
My questions are:
Can I use Flask-SocketIO for my use case, where the API takes 2 to 3 minutes to respond?
If not IIS, how do I deploy Flask-SocketIO on a Windows machine? I have gone through the documentation but did not find any deployment strategy for hosting it on Windows.
What is the best way to achieve concurrency in this case?
Thanks in advance,
Prasad.
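On the first question: Flask-SocketIO copes with long jobs as long as the event handler itself does not block. A minimal sketch of the event-based flow, using Flask-SocketIO's documented start_background_task API; the event names and the processing stub are illustrative, not a definitive implementation:

```python
# Minimal Flask-SocketIO sketch: accept a job, process it in the
# background, and push the result to the requesting client when done.
# Event names ('process_image', 'result') are illustrative placeholders.
from flask import Flask, request
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

def long_job(sid, image_bytes):
    result = {"status": "done"}  # placeholder for the 2-3 minute processing
    socketio.emit("result", result, to=sid)  # push JSON back, no polling

@socketio.on("process_image")
def handle_image(image_bytes):
    # start_background_task keeps this handler from blocking the server
    socketio.start_background_task(long_job, request.sid, image_bytes)
    return {"status": "accepted"}  # ack so the client knows the job was queued

if __name__ == "__main__":
    # On Windows the development server runs directly; for production the
    # Flask-SocketIO docs point to eventlet/gevent behind a reverse proxy.
    socketio.run(app, host="0.0.0.0", port=5000)
```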

Related

How can I check whether a limit on outbound requests from IIS or Windows Server has been exceeded?

I have a web application hosted on IIS on Windows Server. My API uses external APIs as a data source. Metrics in my API show that when there are too many requests to my API, getting a response from the external APIs takes longer than usual: the elapsed time is usually 3 seconds plus the time the external API actually spent processing the request.
The external APIs are in fact SOAP services on an ESB, if it matters.
Metrics of the external APIs, and of the network channels between my server and the ESB and between the ESB and the external servers, don't show these 3 seconds anywhere. It happens with most of the external APIs, but not all of them, and it never seems to happen to database requests from my API.
So it seems like there might be a limit on outbound requests from IIS, from the app pool, or from Windows Server itself. But I don't know how to check that or where to look. Maybe Performance Monitor, but there are tons of metrics - which should I choose?
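One way to test the outbound-limit hypothesis before digging through Performance Monitor is a standalone probe: fire batches of concurrent requests at one external host and see whether latency climbs in steps as concurrency grows, which is the signature of a fixed connection cap. A hedged sketch; the URL is a placeholder:

```python
# Hypothetical probe: if outbound connections to a host are capped,
# median latency climbs in steps once concurrency passes the cap.
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "http://external-api.example/health"  # placeholder endpoint

def timed_get(_):
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    return time.perf_counter() - start

for workers in (2, 8, 32):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = sorted(pool.map(timed_get, range(workers * 4)))
    # compare medians across levels: a cap shows up as queueing delay
    print(workers, "workers -> median", latencies[len(latencies) // 2])
```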

Should the socket.io server update the database?

I'm building a web app that requires communication between clients. For this I'm using socket.io. Some data, however, has to be updated regularly in the database.
Some of it not that often (preferences, on button click), other data every second, for example a timer value. This cannot be calculated after the fact because the timer can be paused.
Right now, whenever a client emits an event, it also makes a request to the backend to update the database. I was wondering if it would be a good idea to have the socket.io server update the database instead, so the clients would only have to take care of the socket communication? It seems to me that having the browser make a separate request to the backend is a bit resource-heavy and takes away some of the advantages of socket-based communication.
Edit: the back end of the app and the socket server are two different servers, but physically they are on the same machine, so their communication could be faster.
The main point of using socket.io is that it lets you push data to clients, so clients do not need to poll your server constantly for the latest changes, and it provides a low-overhead communication channel between the server and the client.
You can call an API and also emit data, and do many other things, on a user click in your application.
It is a good idea to have the socket.io server update the database, and you can also authorize each socket, save the clients' socket information, and so on.
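A minimal sketch of that pattern, using the python-socketio server to match this page's Python stack (the original app may well be Node); the auth check, table, and event names are illustrative assumptions:

```python
# Sketch: the socket server itself persists events, so the browser only
# talks over the socket and skips the extra HTTP round trip.
import sqlite3
import socketio

sio = socketio.Server()
app = socketio.WSGIApp(sio)  # serve with any WSGI server, e.g. eventlet
db = sqlite3.connect("app.db", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS timers (sid TEXT, value INTEGER)")

@sio.event
def connect(sid, environ, auth):
    # authorize each socket up front, as the answer suggests
    if not auth or auth.get("token") != "expected-token":  # placeholder check
        raise socketio.exceptions.ConnectionRefusedError("unauthorized")

@sio.event
def timer_tick(sid, value):
    # persist directly from the socket handler
    db.execute("INSERT INTO timers (sid, value) VALUES (?, ?)", (sid, value))
    db.commit()
    sio.emit("timer_tick", value, skip_sid=sid)  # still relay to other clients
```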

How to measure performance when different servers are involved in producing a response?

How does a request travel from one server to another before the response is sent back to the web browser?
When I searched online, I only found the web browser to web server hop, but what about the other servers, like the app server and the DB server?
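One common approach is for each tier to time its own work and report it in the standard Server-Timing response header, which browser dev tools display per request. A hedged Flask sketch with a placeholder database call:

```python
# Each tier times its own work and reports it via the standard
# Server-Timing header; upstream tiers can append their own entries.
import time
from flask import Flask

app = Flask(__name__)

def query_database():
    time.sleep(0.05)  # placeholder for the real DB call
    return {"rows": 0}

@app.route("/report")
def report():
    start = time.perf_counter()
    data = query_database()
    db_ms = (time.perf_counter() - start) * 1000
    resp = app.make_response(data)
    # 'db;dur=...' marks how long this server spent waiting on the database
    resp.headers["Server-Timing"] = f"db;dur={db_ms:.1f}"
    return resp
```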

Using SignalR to push to clients from a long running process

Firstly, here is state of my application:
I have a request coming in from a client (angularjs app) into my API (web api 2). This request is processed and a record is stored in a database. A response is then sent back to the client.
Currently, I have a Windows service polling for and processing these records.
Processing this record can be long running. As a side effect to processing this record, there might be notifications generated to be sent back to one or more clients.
My question is how do I architect this, such that I can utilise SignalR to be able to push the notifications back to the client.
My stumbling block:
I can register and store (in-memory, backed by a DB) the client's SignalR connection ID along with the application's own user identifier. This way I can match a generated notification with a SignalR client.
At the moment, I'm hosting the SignalR hubs within the IIS process. So how do I get back from the Windows Service to IIS to notify the client when a notification is generated?
Furthermore, I should say I am already using SignalR elsewhere in the application and am using a SQL Server backplane.
The issues with the current architecture:
Any processing is done in the same web request, and notifications are sent out via SignalR before a response to the client is returned. Luckily, the processing is minimal and very quick.
I think this is not very good in terms of performance or maintenance in the long run.
Potential solutions:
Remove the SignalR hubs from IIS and host them somewhere else - a Windows service?
Expose an endpoint on the API for the Windows service to call to push the notification once one is generated?
Finally, to add more ingredients to the mix: Use a service bus to remove the polling component of the windows service, and move to a pub/sub architecture. Although this is more work than I want to chew off right now.
Any ideas/recommendations/constructive criticisms are welcome.
Thanks.
Take a look at this sample for starters.
A more advanced solution would be to use a backplane to manage the communication between the front end and the back end...
HTH
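For illustration, the question's second option (an endpoint the background service calls, which the web tier relays to the connected client) looks roughly like the sketch below. It is written with Flask-SocketIO to match the main question's stack rather than SignalR, and the in-memory connection map, route, and event names are assumptions:

```python
# Relay pattern: a background worker POSTs a notification to the web
# tier, which looks up the user's live socket and pushes it out.
from flask import Flask, request
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)
connections = {}  # user id -> socket id (the question backs this with a DB)

@socketio.on("register")
def register(user_id):
    connections[user_id] = request.sid

@app.route("/internal/notify", methods=["POST"])
def notify():
    # called by the background service when processing finishes;
    # protect this route so only the service can reach it
    payload = request.get_json()
    sid = connections.get(payload["user_id"])
    if sid:
        socketio.emit("notification", payload["message"], to=sid)
    return ("", 204)
```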

One Web API calls the other Web APIs

I have 3 Web API servers which have the same functionality. I am going to add another Web API server which will be used only as a proxy. So all clients, from anywhere and on any device, will call the Web API proxy server, and the proxy server will randomly forward the client requests to the other 3 Web API servers.
I am doing it this way because:
There are a lot of client requests per minute, and I cannot use only 1 Web API server.
If one server is dead, clients can still send requests to the other servers. (I need at least 1 web server responding to the clients.)
The Question is:
What is the best way to implement the Web API Proxy server?
Is there a better way to handle high volume client requests?
How do I make sure at least 1 web server responds to the clients if I have 3 servers and 2 of them are dead?
Please give me some links or documents that can help me.
Thanks
Sounds like you need a reverse proxy. Apache HTTP Server and NGINX can both be configured to act as a load-balancing reverse proxy.
NGINX documentation: http://nginx.com/resources/admin-guide/reverse-proxy/
Apache HTTP Server documentation: http://httpd.apache.org/docs/2.2/mod/mod_proxy.html
What you are describing is called load balancing, and Azure (which it seems you are using, judging from your comments) provides it out of the box for both Cloud Services and Websites. You should create as many instances as you like under the same cloud service and open a specific port (which will be load-balanced) under the cloud service endpoints.
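As a sketch only (a real deployment would use NGINX or the Azure load balancer as suggested above), the random-forwarding-with-failover behaviour the question describes can be prototyped in a few lines of Python; the backend addresses are placeholders:

```python
# Toy proxy: try the backends in random order until one answers, so the
# client still gets a response when 2 of the 3 servers are down.
import random
from flask import Flask, Response, request
import requests

app = Flask(__name__)
BACKENDS = [  # placeholder addresses for the three Web API servers
    "http://10.0.0.1:8080",
    "http://10.0.0.2:8080",
    "http://10.0.0.3:8080",
]

@app.route("/<path:path>", methods=["GET", "POST"])
def proxy(path):
    for backend in random.sample(BACKENDS, len(BACKENDS)):
        try:
            upstream = requests.request(
                request.method, f"{backend}/{path}",
                data=request.get_data(), timeout=5)
            return Response(upstream.content, upstream.status_code)
        except requests.RequestException:
            continue  # backend dead or slow: fall through to the next one
    return Response("all backends unavailable", status=502)
```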
