ASP.NET Web API load balancing system - asp.net-web-api

I want to build an ASP.NET Web API server that can re-route incoming HTTP requests to other Web API servers. The main server will be the master, and its only job will be accepting requests and routing them to the other servers. Slave servers will inform the master server when they have started and are ready to accept HTTP requests. Slave servers must not only report that they are alive, but also which APIs they support. I think I have to re-map the routing tables in the master server at runtime. Is that possible?
This seems like load balancing by functionality. Is there any way to do this? I have to write a load balancer for Web API; any suggestion is welcome.
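Roughly, the runtime routing table I have in mind on the master would look something like this (sketched in Java just to illustrate the data structure; all names are made up):

```java
import java.net.URI;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the master's runtime routing table:
// slaves register the API routes they support when they start up,
// and the master picks a backend per request (round-robin here).
class RoutingTable {
    private final Map<String, List<URI>> routes = new ConcurrentHashMap<>();
    private final AtomicInteger counter = new AtomicInteger();

    // Called when a slave announces itself (e.g. via a registration endpoint)
    void register(String apiRoute, URI slave) {
        routes.computeIfAbsent(apiRoute, k -> new CopyOnWriteArrayList<>()).add(slave);
    }

    // Called for every incoming request to pick a backend for that route
    URI pick(String apiRoute) {
        List<URI> backends = routes.get(apiRoute);
        if (backends == null || backends.isEmpty()) return null;
        return backends.get(Math.floorMod(counter.getAndIncrement(), backends.size()));
    }
}
```

The master would then forward each request to whatever `pick` returns, and return null/503 when no slave has registered the route.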

Related

I need to build a Vert.x virtual host server that channels traffic to other Vert.x apps. How is this kind of inter-app communication accomplished?

As illustrated above, I need to build a Vert.x Java app that will be an HTTP server/virtual host (TLS HTTP traffic, WebSocket traffic) that will redirect/channel specific domain traffic to other Vert.x Java apps running on the same server, each in its own JVM.
I have been reading for days but I remain uncertain as to how to approach all aspects of the task.
What I DO know or have experience with:
Creating an HTTP server, etc.
Using a Vert.x VirtualHost handler to "handle" incoming traffic for a specific domain
What I DO NOT know:
How do I "re-direct" a domain's traffic to another Vert.x app (this other Vert.x app would also be running on the same server, in its own JVM)?
- Naturally this "other" Vert.x app would need to respond to HTTP requests, etc. What Vert.x mechanisms do I employ to accomplish this aspect of the task?
Are any of the following concepts part of the solution? I'm unfamiliar with them and with how they may or may not form part of the solution:
Running each Vert.x app using -cluster option?
Vert.x Streams?
Vert.x Pumps?
There are multiple ways to let your microservices communicate with each other. The fact that all your apps are running on the same server doesn't change much, but it makes option 2.) easy to configure.
1.) REST-based client-server communication
Both host and apps have a web server
When you handle the incoming requests on the host, you simply call another app with an HttpClient
Typically all services find each other's addresses via service discovery.
E.g.: each service registers its address in a central registry, and other services then use this registry to find the addresses.
Note: this may be overkill for you; you can also just configure the addresses of the other services.
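The registry idea can be sketched with an in-memory map (a real setup might use Consul, etcd, or simply static configuration as noted above; all names here are illustrative):

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Minimal in-memory service registry: each app registers its address
// on startup; the host looks the address up before forwarding a request.
class ServiceRegistry {
    private final Map<String, String> addresses = new ConcurrentHashMap<>();

    void register(String serviceName, String address) {
        addresses.put(serviceName, address);
    }

    Optional<String> lookup(String serviceName) {
        return Optional.ofNullable(addresses.get(serviceName));
    }
}
```

The host would then issue the actual call with an HttpClient against the looked-up address.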
2.) You start the Vert.x microservices in clustered mode
The event bus is then shared among the services
For all incoming requests you send a broadcast on the event bus
The responsible app replies to the message
For further reading you can check out https://vertx.io/docs/vertx-hazelcast/java/#configcluster. You start your projects with the -cluster option and define the clustering in an XML configuration. I think by default it finds the services via local broadcast.
3.) You use a message broker like RabbitMQ, etc.
All your apps connect to a central message broker
When a new request comes in to the host, it sends a message to the message broker
The responsible app then listens for the relevant messages and replies
The host receives the reply from the message broker
There are already many existing Vert.x clients for message brokers such as Kafka, Camel, and ZeroMQ:
https://github.com/vert-x3/vertx-awesome#integration
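The request/reply flow in option 3.) can be illustrated with an in-process stand-in using blocking queues (purely a sketch of the pattern; a real deployment would use a RabbitMQ or Kafka client against an actual broker):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// In-process stand-in for a broker: the host publishes a request that
// carries its own reply queue; the responsible app consumes and replies.
class BrokerDemo {
    record Request(String body, BlockingQueue<String> replyTo) {}

    static String roundTrip(String body) {
        try {
            BlockingQueue<Request> requestQueue = new ArrayBlockingQueue<>(10);

            // The "responsible app": consumes one request and replies to it
            Thread app = new Thread(() -> {
                try {
                    Request req = requestQueue.take();
                    req.replyTo().put("handled: " + req.body());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            app.start();

            // The host: publishes the request, then waits for the reply
            BlockingQueue<String> replyQueue = new ArrayBlockingQueue<>(1);
            requestQueue.put(new Request(body, replyQueue));
            return replyQueue.poll(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }
}
```

With a real broker the reply queue would be a temporary/exclusive queue (or a correlation id on a shared reply topic), but the shape of the exchange is the same.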

One Web API calls the other Web APIs

I have 3 Web API servers which have the same functionality. I am going to add another Web API server which will be used only as a proxy. So all clients, from anywhere and on any device, will call the Web API proxy server, and the proxy server will randomly forward the client requests to one of the other 3 Web API servers.
I am doing this way because:
There are a lot of client requests per minute and I cannot use only 1 Web API server.
If one server dies, clients can still send requests to the other servers. (I need at least 1 web server responding to clients.)
The Question is:
What is the best way to implement the Web API Proxy server?
Is there a better way to handle high volume client requests?
I need at least 1 web server responding to clients, even if I have 3 servers and 2 of them are dead.
Please give me some links or documents that can help me.
Thanks
Sounds like you need a reverse proxy. Apache HTTP Server and NGINX can both be configured to act as a load-balancing reverse proxy.
NGINX documentation: http://nginx.com/resources/admin-guide/reverse-proxy/
Apache HTTP Server documentation: http://httpd.apache.org/docs/2.2/mod/mod_proxy.html
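For example, a minimal NGINX configuration along these lines (server addresses are placeholders) proxies incoming requests and distributes them across the three backends:

```nginx
upstream api_backends {
    server 10.0.0.1:80;
    server 10.0.0.2:80;
    server 10.0.0.3:80;   # round-robin by default
}

server {
    listen 80;
    location / {
        proxy_pass http://api_backends;
    }
}
```

If one backend stops responding, NGINX marks it as unavailable and keeps sending traffic to the remaining servers, which covers the "at least 1 server must respond" requirement.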
What you are describing is called load balancing, and Azure (which it seems you are using, judging from your comments) provides it out of the box for both Cloud Services and Websites. You should create as many instances as you like under the same cloud service and open a specific port (which will be load balanced) under the cloud service endpoints.

how do web application sessions work when running on more than one server?

This is a general question about how web sessions work across multiple servers. My knowledge of web sessions is not very deep, but as far as I know a web session is typically stored directly in memory of the running web server application, so when a request comes in the server doesn't have to make database requests to fetch the session data. If a popular website needs multiple servers to handle its level of traffic, I assume a request could get directed to any of the servers by some load balancer. But how does the server handling that request get the associated session data if the previous request was handled by a different server? Do multi-server sites require special session-handling infrastructure, or do the load balancers somehow know to route requests from the same client to the same server?
This question on ServerFault is the same as this one, and has a good answer. In overview, there are 3 common methods:
Session information stored in cookies only
Load balancer always directs user to the same machine
Shared backend database or key/value store.
See the link for more in-depth details of each.
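Method 2 (sticky sessions) can be as simple as deriving the backend from a stable hash of the session cookie, so repeated requests from one session always land on the same machine. This is only a sketch of the idea, not what any particular load balancer does internally:

```java
import java.util.List;

// Sticky-routing sketch: hash the session id to a server index so that
// the same session is consistently routed to the same server.
class StickyRouter {
    private final List<String> servers;

    StickyRouter(List<String> servers) {
        this.servers = servers;
    }

    String route(String sessionId) {
        // floorMod keeps the index non-negative even for negative hash codes
        return servers.get(Math.floorMod(sessionId.hashCode(), servers.size()));
    }
}
```

The trade-off versus a shared session store: routing stays cheap and session data stays in one server's memory, but if that server goes down, its sessions are lost (or remapped to a server that doesn't have the data).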

Design of queues with JMS & QPID

I am currently assigned the task of writing an application which will use the JMS API to communicate, using Apache Qpid as the JMS provider.
So basically my application will have multiple instances of a server running. Each server will serve a set of unique desks, so each instance will only have data for the desks it is serving.
There will also be multiple instances of clients, each again configured by desk.
Now when a client starts up, it will request the data for the desk it is serving from the servers. The request should only go to the server that has that desk's data loaded, and the response should only go back to the client who requested the data for that desk.
I am thinking of using queues for this. I am not sure if I should create only one request queue which will be used by all the servers, or separate queues for each server.
For responses, I am planning to use temporary queues.
Note that requests from the client to the server are not very frequent; say each client may send around 50 requests a day.
Can someone please tell me if this is a good design?

Are WebSockets really meant to be handled by Web servers?

The WebSocket standard hasn't been ratified yet; however, from the draft it appears that the technology is meant to be implemented in Web servers. pywebsocket implements a WebSocket server which can be dedicated or loaded as an Apache plugin.
So what I am wondering is: what's the ideal use of WebSockets? Does it make any sense to implement a service as a dedicated WebSocket server, or is it better to rethink it to run on top of a WebSocket-enabled Web server?
The WebSocket protocol was designed with three models in mind:
A WebSocket server running completely separately from any web server.
A WebSocket server running separately from a web server, but with traffic proxied to the WebSocket server from the web server (allowing WebSocket and HTTP traffic to co-exist on the same port).
A WebSocket server running as a plugin in the web server.
The model you pick really depends on the application you are trying to build and some other constraints that may limit your choices.
For example, if your application is going to be served from a single web server and the WebSocket connection will always be back to that same server, then it probably makes sense to just run the WebSocket server as a plugin/module in the web server.
On the other hand if you have a general WebSocket service that is usable from many different web sites (for example, you could have continuous low-latency traffic updates served from a WebSocket server), then you probably want to run the WebSocket server separate from any web server.
Basically, the tighter the integration between your WebSocket service and your web service, the more likely you will want to run them together and on the same port.
There are some constraints that may force one model or another:
If you control the server(s) but not the incoming firewall rules, then you probably have no choice but to run the WebSocket server on the same port(s) as your HTTP/HTTPS server (e.g. 80 and 443). In which case you will have to use a web server plugin or proxy to the real WebSocket server.
On the other hand, if you do not have super-user permission on the server where you are running the WebSocket server, then you will probably not be able to use ports 80 and 443 (below 1024 is generally a privileged port range) and in that case it really doesn't matter whether you run the HTTP/S and WebSocket servers on the same port or not.
If you have cookie-based authentication (such as OAuth) in the web server and you would like to re-use it for the WebSocket connections, then you will probably want to run them together (a special case of tight integration).