I have 3 Web API servers with the same functionality. I am going to add another Web API server that will act purely as a proxy: all clients, from anywhere and on any device, will call the proxy server, and the proxy will randomly forward each client request to one of the 3 Web API servers.
I am doing it this way because:
There are too many client requests per minute for a single Web API server to handle.
If one server dies, clients can still send requests to the other servers. (I need at least one web server available to respond to clients.)
The Question is:
What is the best way to implement the Web API Proxy server?
Is there a better way to handle high volume client requests?
How do I guarantee that at least one web server responds to clients, even if 2 of my 3 servers are dead?
Please give me some links or documents that can help me.
Thanks
Sounds like you need a reverse proxy. Apache HTTP Server and NGINX can both be configured to act as a load-balancing reverse proxy.
NGINX documentation: http://nginx.com/resources/admin-guide/reverse-proxy/
Apache HTTP Server documentation: http://httpd.apache.org/docs/2.2/mod/mod_proxy.html
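For example, a minimal NGINX configuration for a load-balanced reverse proxy might look like this (the three upstream addresses are placeholders for your Web API servers):

upstream webapi_servers {
    # The three Web API servers (placeholder addresses).
    # Round-robin by default; servers that stop responding are
    # temporarily taken out of rotation.
    server 10.0.0.1:80;
    server 10.0.0.2:80;
    server 10.0.0.3:80;
}

server {
    listen 80;
    location / {
        # Forward every client request to one of the upstream servers
        proxy_pass http://webapi_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

The default round-robin balancing also covers your failover requirement: as long as at least one upstream server is alive, clients keep getting responses.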
What you are describing is called load balancing, and Azure (which it seems you are using, judging from your comments) provides it out of the box for both Cloud Services and Websites. You should create as many instances as you like under the same cloud service and open a specific port (which will be load-balanced) under the cloud service's endpoints.
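For example, in a Cloud Service the load-balanced endpoint is declared in ServiceDefinition.csdef (the role and endpoint names below are just examples):

<ServiceDefinition name="MyCloudService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebApiRole" vmsize="Small">
    <Endpoints>
      <!-- Traffic to port 80 is load-balanced across all instances of this role -->
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </Endpoints>
  </WebRole>
</ServiceDefinition>

Setting the instance count in ServiceConfiguration.cscfg to 2 or more then gives you the balancing and failover automatically.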
Related
My use case:
- I am trying to build an API that takes images as input, performs some image-processing operations, and returns output JSON to the client.
- Multiple clients can request the server concurrently, and the server takes 2 to 3 minutes to process each request.
- Initially I thought of a normal Flask application where the client would poll the server for a response periodically, but since Flask-SocketIO can respond to the client event-based, I want to use Flask-SocketIO.
- As the other APIs in my project are hosted on IIS, I wanted to use the same IIS as the hosting server.
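What I have in mind is roughly this (a minimal sketch; process_image stands in for my actual 2-3 minute operation):

from flask import Flask, request
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

def process_image(data):
    # placeholder for the actual 2-3 minute image processing
    return {"status": "done"}

@socketio.on('process')
def handle_process(data):
    sid = request.sid  # remember which client asked

    def run_job():
        result = process_image(data)
        # push the result back to that client when the job finishes
        socketio.emit('result', result, to=sid)

    # run the long job in a background task so the event handler returns quickly
    socketio.start_background_task(run_job)

if __name__ == '__main__':
    socketio.run(app)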
My questions:
- Can I use Flask-SocketIO for my use case, where the API takes 2 to 3 minutes to respond?
- If not IIS, how do I deploy Flask-SocketIO on a Windows machine? I have gone through the documentation but did not find any deployment strategy for hosting it on Windows.
- What is the best way to achieve concurrency in this case?
Thanks in advance
Prasad.
In server-side load balancing, the clients call an intermediate server, which then decides which instance of the actual server (or microservice) to call.
In client-side load balancing, too, the clients call an intermediate server (an API gateway such as Zuul, configured with a load balancer such as Ribbon and a naming server such as Eureka), which then decides which instance of the microservice to call.
Unless we include the API gateway as part of the client, the client still doesn't know the IP address of the exact server to which it should send the request. That seems to me a lot like server-side load balancing. Is there something I'm missing?
(Including the API gateway as part of the client seems weird, since it's usually deployed on a different server from the client.)
In client-side load balancing, the client does the heavy lifting of discovering and connecting to the origin server. The client may reference a lookup service (Eureka, Consul, maybe DDNS) to discover what the end destination is, and the registry will dole out a valid origin. The communication is direct, client to server, with no middleman.
In server-side load balancing, the client is dumb and makes a call to a predetermined address (usually a DNS name or a static IP). That device then proxies the connection (at the TCP or protocol level) to an origin server chosen based on a lookup, heartbeats, etc.
I've seen benefits in client-side routing: as long as you have IP connectivity between client and server, adding new services, locations, products, apps, etc. is trivial for the infrastructure. As long as the new server can "register" with the registry and the client has IP access to the server, it just works, and IT does not have to be involved in rolling out your new service.
The drawback is that it makes the client a little heavier, it requires direct IP access from client to server, and it may be confusing for traditional IT folks and auditors. Each client needs to be aware of the registry and have code to make the calls (or use a sidecar/sidekick).
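As a rough sketch of what the client does in this model (the registry URL and response format here are hypothetical; a real client would use a Eureka/Consul client library with caching and retries):

import json
import random
import urllib.request

REGISTRY_URL = "http://registry.internal:8500"  # hypothetical registry address

def lookup_instances(service_name):
    """Ask the registry for the healthy instances of a service."""
    with urllib.request.urlopen(f"{REGISTRY_URL}/services/{service_name}") as resp:
        # hypothetical response format: [{"host": "10.0.0.5", "port": 8080}, ...]
        return json.loads(resp.read())

def call_service(service_name, path):
    """Pick an instance ourselves and talk to it directly -- no middleman."""
    instance = random.choice(lookup_instances(service_name))
    url = f"http://{instance['host']}:{instance['port']}{path}"
    with urllib.request.urlopen(url) as resp:
        return resp.read()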
I've seen this in practice: a group that started transitioning their apps to a Docker environment was able to run the Docker-based apps alongside the non-Docker versions at the same time, without having to get IT involved, and could experiment and test quickly and autonomously.
If you have autonomous teams, are highly advanced on the DevOps spectrum, and have a lot of trust in your teams, client-side routing and load balancing may be a good experience for you.
I want to build an ASP.NET Web API server that can re-route incoming HTTP requests to other Web API servers. The main server will be the master, and its only job will be accepting requests and routing them to the other servers. Slave servers will inform the master server when they have started and are ready to accept HTTP requests. Slave servers must not only report that they are alive, but also send which APIs they support. I think I have to re-map the master server's routing tables at runtime. Is that possible?
This seems like load balancing by functionality. Is there any way to do this? I have to write a load balancer for Web API; any suggestion is welcome.
I want to configure Varnish to use HTTPS(!) services as backend services. The key here is the SSL part of the connection to the backend service! I have limited control over those HTTPS backend services (think of them as SaaS services hosted in the cloud).
It's a setup like this: User-Agent -> AWS ELB as SSL terminator -> Varnish in AWS -> HTTPS SaaS services in the cloud
The reasons for that are as follows:
- I want to use Varnish ESI to decorate the SaaS service UI with my own custom page header & footer.
- By routing all requests through Varnish, I get additional analytics data about the SaaS service that I wouldn't get otherwise
- I can use Varnish to re-write URLs of the SaaS service effectively hiding the SaaS service URL from the end-users
I am able to use AWS ELB as SSL terminator towards the user-agent, but how do I get Varnish to access the HTTPS SaaS service as an origin server?
Background:
I work on a web portal where we will present a number of different services to our customers (all the services have their own existing UIs, i.e. they are not headless REST APIs!). The main thing that pulls all those services together is a common page header and footer (the page header shows top-level navigation and login/username/logout).
The types of services we have are as follows; all have their own UI layer, which we don't want to replicate:
- White-labeled 3rd party SaaS service (think of e.g. Zendesk or Salesforce), hosted in the cloud
- In-house developed JavaEE/Spring services which are hosted in AWS
- Services that other teams in our company developed, but they are hosted in our own data center
Adding ESI includes to each of those services is fine, but I don't want to duplicate the work of re-implementing the page header/footer for every service.
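For illustration, the ESI wiring I have in mind looks roughly like this (a sketch in Varnish 4 syntax; the /common/header path is just an example):

# Each service's HTML would contain an include tag such as:
#   <esi:include src="/common/header"/>

sub vcl_backend_response {
    # have Varnish parse and process ESI tags in the response
    set beresp.do_esi = true;
}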
I ran into a similar requirement recently where the desired back-end needed to be accessed using https.
There are, of course, a lot of objections that could be raised as to why this is the wrong way to do it... but in this case I was constrained by the fact that I needed the data encrypted all the way to the back-end, a significant geographical distance, and the fact that a VPN was not possible because of the ownership and control of the various systems.
Here is a workaround using stunnel4 that, from my limited testing, seems to have the potential to solve this problem.
Sample lines from the stunnel4 configuration:

; run in client mode: accept plaintext, make SSL connections outward
client = yes

; accept plaintext connections on local port 8888 and forward each one
; over a new SSL connection to port 443 on the destination host
[mysslconnect]
accept = 8888
connect = dest.in.ation.host.or.ip:443
Now stunnel4 is listening on port 8888 of my local (Varnish) machine, and each time an incoming connection arrives, it sets up an SSL connection to port 443 on the remote system.
A connection to 127.0.0.1 port 8888 on the local server lets me speak cleartext HTTP to the destination back-end server, over an SSL connection that is actually managed by stunnel4. So configuring Varnish to use 127.0.0.1:8888 as the back-end does what I intend: Varnish thinks it's speaking to an ordinary HTTP server, unaware of what stunnel4 is doing on its behalf.
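The corresponding Varnish backend definition is then just a plain HTTP backend pointing at the local stunnel port:

backend default {
    .host = "127.0.0.1";   # stunnel4 listens here in cleartext
    .port = "8888";        # and forwards over SSL to the real HTTPS backend
}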
I can't vouch for the scalability or reliability of this, since I've only just gotten it working, but so far it seems to be a viable workaround for this limitation in Varnish.
Accessing HTTPS backends in Varnish isn't supported. Varnish speaks HTTP to the backends.
If you want to access HTTPS backend content you'll have to proxy it through another daemon/proxy that adds/strips HTTPS. There are quite a few choices for this, one of which is stunnel, which is tried and tested.
From what you are describing (rewriting content), I'd say that you are pretty close to using the wrong hammer. Varnish might not be the best tool for this; have you considered gluing things together with mod_rewrite/mod_substitute instead?
This is supported by Varnish Cache Plus, which isn't free.
backend default {
    .host = "backend.example.com";
    .port = "443";
    .ssl = 1;                # Turn on SSL support
    .ssl_sni = 1;            # Use SNI extension (default: 1)
    .ssl_verify_peer = 1;    # Verify the peer's certificate chain (default: 1)
    .ssl_verify_host = 1;    # Verify the host name in the peer's certificate (default: 0)
}
The WebSocket standard hasn't been ratified yet; however, from the draft it appears that the technology is meant to be implemented in web servers. pywebsocket implements a WebSocket server that can run dedicated or be loaded as an Apache plugin.
So what I am wondering is: what's the ideal use of WebSockets? Does it make sense to implement a service as dedicated WebSocket servers, or is it better to rethink it to run on top of a WebSocket-enabled web server?
The WebSocket protocol was designed with three models in mind:
A WebSocket server running completely separately from any web server.
A WebSocket server running separately from a web server, but with traffic proxied to the WebSocket server from the web server (allowing WebSocket and HTTP traffic to co-exist on the same port)
A WebSocket server running as a plugin in the web server.
The model you pick really depends on the application you are trying to build and some other constraints that may limit your choices.
For example, if your application is going to be served from a single web server and the WebSocket connection will always be back to that same server, then it probably makes sense to just run the WebSocket server as a plugin/module in the web server.
On the other hand if you have a general WebSocket service that is usable from many different web sites (for example, you could have continuous low-latency traffic updates served from a WebSocket server), then you probably want to run the WebSocket server separate from any web server.
Basically, the tighter the integration between your WebSocket service and your web service, the more likely you will want to run them together and on the same port.
There are some constraints that may force one model or another:
If you control the server(s) but not the incoming firewall rules, then you probably have no choice but to run the WebSocket server on the same port(s) as your HTTP/HTTPS server (e.g. 80 and 443), in which case you will have to use a web server plugin or a proxy to the real WebSocket server.
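As an illustration of that proxy setup, an NGINX block along these lines is commonly used (a sketch, assuming a modern NGINX with WebSocket proxying support and the real WebSocket server listening locally on port 8080):

location /websocket/ {
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;
    # pass the Upgrade handshake through so the connection can switch protocols
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}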
On the other hand, if you do not have super-user permission on the server where you are running the WebSocket server, then you will probably not be able to use ports 80 and 443 (below 1024 is generally a privileged port range) and in that case it really doesn't matter whether you run the HTTP/S and WebSocket servers on the same port or not.
If you have cookie-based authentication (such as OAuth) in the web server and you would like to re-use it for the WebSocket connections, then you will probably want to run them together (a special case of tight integration).