Ignite client failover

We have the following Ignite cluster configuration: there are several servers plus 1 client, which acts as a balancer. For example, the IP addresses are:
server1 - 192.168.100.1
server2 - 192.168.100.2
server3 - 192.168.100.3
client - 192.168.100.100
So requests go to the client, for example http://192.168.100.100:8082/request1.
The client then sends a distributed computing task to the cluster; the calculations are performed on one of the servers, i.e. on 192.168.100.1, 192.168.100.2, or 192.168.100.3. The results of the calculations are returned to the client, and the client finally sends the response to the request.
There is no problem if one of the servers crashes: the client knows about it and does not send tasks to that server. But there is a problem if the client crashes: all servers still work, but the address http://192.168.100.100:8082/request1 is not available.
What can I do about it? Can the client be made failover-capable? Does Ignite have something for that? If not, what other technology/software can I use?
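Ignite itself does not make a single client IP address highly available; a common pattern (a sketch, not from the original question) is to run two or more client/balancer nodes and fail over between them, for example behind a floating IP managed by something like keepalived. The same failover idea is built into the Ignite thin client, which accepts several addresses and tries the next one when a connection fails. A minimal sketch, assuming Ignite 2.x; the second address 192.168.100.101 and the default thin-client port 10800 are assumptions:

import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class FailoverClient {
    public static void main(String[] args) throws Exception {
        // Two redundant entry nodes instead of a single 192.168.100.100
        ClientConfiguration cfg = new ClientConfiguration()
            .setAddresses("192.168.100.100:10800", "192.168.100.101:10800");
        // If the first address is unreachable, the thin client
        // transparently tries the next one in the list.
        try (IgniteClient client = Ignition.startClient(cfg)) {
            System.out.println(client.cacheNames());
        }
    }
}

The HTTP entry point from the question could be made redundant the same way: a second client node plus a virtual IP (or a DNS name with multiple A records) in front of both.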

Related

OkHttp3: how to retry another IP address if one is unreachable

Does OkHttp3 support the following case:
x.x.x.x myapp.com
y.y.y.y myapp.com
We have two IPs for one hostname, and it looks like OkHttpClient always retries the first IP address instead of trying the other available one.
Does retryOnConnectionFailure(true) cover this? From the doc, it should by default:
Configure this client to retry or not when a connectivity problem is encountered. By default, this client silently recovers from the following problems:
Unreachable IP addresses. If the URL's host has multiple IP addresses, failure to reach any individual IP address doesn't fail the overall request. This can increase availability of multi-homed services.
Stale pooled connections. The ConnectionPool reuses sockets to decrease request latency, but these connections will occasionally time out.
Unreachable proxy servers. A ProxySelector can be used to attempt multiple proxy servers in sequence, eventually falling back to a direct connection.
Set this to false to avoid retrying requests when doing so is destructive. In this case the calling application should do its own recovery of connectivity failures.
OkHttp will try both in sequence.
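A minimal sketch of that behavior, assuming OkHttp 3.x. retryOnConnectionFailure(true) is already the default; the custom Dns below only makes the two addresses from the question explicit (normally the system resolver returns them, and x.x.x.x / y.y.y.y are the question's placeholders):

import java.net.InetAddress;
import java.util.Arrays;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class MultiIpRetry {
    public static void main(String[] args) throws Exception {
        OkHttpClient client = new OkHttpClient.Builder()
            .retryOnConnectionFailure(true) // the default, shown for clarity
            // Pin both A records for myapp.com; replace the placeholders
            // with the real addresses.
            .dns(hostname -> Arrays.asList(
                InetAddress.getByName("x.x.x.x"),
                InetAddress.getByName("y.y.y.y")))
            .build();
        Request request = new Request.Builder().url("https://myapp.com/").build();
        try (Response response = client.newCall(request).execute()) {
            // A connect failure on the first address does not fail the
            // call; OkHttp falls through to the second address.
            System.out.println(response.code());
        }
    }
}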

Reserved workers for each virtualhost for Apache prefork

I administer an Apache prefork server on CentOS.
Apache hosts several ASP.NET Core services behind a reverse proxy.
Apache usually serves about 20 requests per second, and the server limit is 920 workers.
When one hosted service gets busy, incoming requests occupy all Apache workers very quickly.
As a consequence, requests intended for another service, i.e. another virtual host, find no free worker.
Is it possible to reserve a defined number of workers for a specific virtual host?
Something like this:
subdomain1.myserver.com:443 - 200 Workers
subdomain2.myserver.com:443 - 200 Workers
subdomain3.myserver.com:443 - 200 Workers
subdomain4.myserver.com:443 - 200 Workers
myserver.com:4800 - 70 Workers
myserver.com:443 - 50 Workers
Thank you
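For context (a note added here, not part of the original question): prefork worker limits in stock Apache are global, and there is no stock directive that reserves workers per virtual host, so an allocation like the one above would need something like a separate Apache instance per subdomain or a third-party module. A minimal sketch of the global limits, with the 920 figure taken from the question:

# mpm_prefork limits apply to the whole server, not to individual vhosts
<IfModule mpm_prefork_module>
    ServerLimit          920
    MaxRequestWorkers    920
</IfModule>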

AWS Elastic Load Balancer not responding from Internet connection

I have created an EC2 instance (as part of provisioning a Tomcat Beanstalk instance). Now I need to configure an HTTPS connection to the EC2 instance. As per the Beanstalk documentation, the easiest way is to configure a load balancer that interacts with browsers using HTTPS and routes traffic to the EC2 instance using HTTP.
So I configured a load balancer under the EC2 management console. After the configuration, I tried to ping the public DNS name of the load balancer and the resolved IP address. The name resolves, but ping gets no response, as shown below:
ping 13.54.72.179
PING 13.54.72.179 (13.54.72.179) 56(84) bytes of data.
^C
--- 13.54.72.179 ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 6139ms
I carefully checked all the configurations against the load balancer configuration and troubleshooting documentation. Everything seems to be configured properly:
Target group: the target group shows a healthy state in the monitoring tab.
VPC: the load balancer availability zones and the EC2 instance are in the same VPC. Also, the route table has an internet gateway associated with the 0.0.0.0/0 destination.
Load balancer listeners: both HTTP and HTTPS listeners are configured, and the load balancer is internet-facing.
Security group for the load balancer: inbound, both HTTP and HTTPS over TCP are allowed from all sources; outbound, all protocols to all destinations are allowed.
Security group for EC2: for testing purposes, all inbound traffic from all sources is allowed.
I researched a few forum threads about the "load balancer not responding" topic and checked the configurations they mention, but none of them worked for me.
So I am at a loss now. Can someone point out what I might have missed in configuring the load balancer, or what I should do to troubleshoot?
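One note on the test itself (added here, not from the original post): the inbound rules described above allow only HTTP/HTTPS over TCP, so ICMP echo requests, which is what ping sends, are dropped before they reach the load balancer; load balancers are generally not meant to be tested with ping anyway. Probing the actual listeners is more informative, for example:

curl -v http://<load-balancer-public-DNS>/
curl -vk https://<load-balancer-public-DNS>/

If curl connects and receives an HTTP response, the listeners are working even though ping fails.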

TIdTCPServer listening on multiple ports?

Alright, I am trying to understand this approach. Let's say I have 2 servers running:
Server A on IP 1.1.1.1 and port 36663
Server B on IP 2.2.2.2 and port 54223
Can I make clients on Server A communicate with clients on Server B?
For example, a client connected to Server A wants to send some data to a client connected to Server B. Can this be done using the Indy TCP server? If the answer is yes, an example would be very helpful for fully understanding this approach.
I have 2 servers on different machines; one machine has a slow network issue and the other has a good network.
The logic here is: when connecting to Server A takes more than 20 seconds, the client should, during those 20 seconds, try to reconnect to the other server's IP and still be able to communicate with the clients already connected to Server A.
TIdTCPServer has a Bindings property, which is a collection of IP/Port pairs that the server listens on. You can have a single TIdTCPServer object listening on multiple IP/Port pairs, or you can use multiple TIdTCPServer objects listening on different pairs, on the same machine.
Either way, the connected clients are stored in the TIdTCPServer.Contexts property.
When a client wants to send data to another client, regardless of which server IP/Port it is connected to, all you have to do is iterate through the Contexts list of the appropriate TIdTCPServer object until you find the TIdContext object of the target client, and then you will have access to its Connection.IOHandler property.
On the other hand, if you have separate TIdTCPServer objects running on different machines, clients cannot directly communicate with clients on another server. You would have to establish a connection between the two servers and then you can proxy any client-to-client data through that connection as needed.
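A minimal sketch of the Contexts iteration described above, in Delphi. It assumes a server component named IdTCPServer1 and that you have your own way of identifying the target client; depending on the Indy version, LockList returns a TList or a TIdContextList:

var
  Clients: TList;
  I: Integer;
  Ctx: TIdContext;
begin
  // Lock the list of connected clients while iterating it
  Clients := IdTCPServer1.Contexts.LockList;
  try
    for I := 0 to Clients.Count - 1 do
    begin
      Ctx := TIdContext(Clients[I]);
      // Match the target client here, e.g. by data you stored on
      // Ctx when it logged in, then write to its connection:
      Ctx.Connection.IOHandler.WriteLn('data from another client');
    end;
  finally
    IdTCPServer1.Contexts.UnlockList;
  end;
end;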

HAProxy load balancing MySQL servers

I have a database cluster of 3 nodes using Percona XtraDB. The three nodes are configured on three different systems, and I use the HAProxy load balancer to pass requests to them.
Two of the 3 nodes are configured as backup in HAProxy. When I send a request to the load balancer connection URL, I can see it go to node A by default. If node A is down and I request a new database connection, the request is routed to node B. This is the desired design.
However, when a connection request is sent to HAProxy from a Java program (via the JDBC URL), the request is routed to node A; if node A goes down after serving a few requests, I want node B or node C to serve the subsequent requests. In the current scenario I just get "Connection Failed".
Is there any configuration that ensures that, in case of a node failure, the database connection does not fail and future requests are routed to the next available node?
My current HAProxy configuration file is as follows:
global
    stats socket /var/run/haproxy.sock mode 0600 level admin
    log 127.0.0.1 local2 debug
    #chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    daemon

defaults
    mode tcp
    log global
    option tcplog
    timeout connect 10000  # default 10 second timeout if a backend is not found
    timeout client 300000
    timeout server 300000
    maxconn 20000

# For Admin GUI
listen stats
    bind :8080
    mode http
    stats enable
    stats uri /stats

listen mysql *:3306
    mode tcp
    balance roundrobin
    option mysql-check user haproxyUser
    option log-health-checks
    server MySQL-NodeA <ip-address>:3306 check
    server MySQL-NodeB <ip-address>:3306 check backup
    server MySQL-NodeC <ip-address>:3306 check backup
mode tcp under listen mysql *:3306 cannot be used like that. Check the configuration before posting here, using this command:
haproxy -c -V -f /etc/haproxy.cfg
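Beyond the configuration syntax, one point worth adding (not from the original thread): HAProxy only routes new TCP connections, so a JDBC connection already established to node A still breaks when node A dies. The Java side has to obtain a fresh connection, which HAProxy then routes to node B or C. A minimal retry sketch; the JDBC URL and credentials are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class FailoverAwareConnect {
    // Placeholder URL: connect through HAProxy, not to a node directly
    static final String URL = "jdbc:mysql://<haproxy-ip>:3306/mydb";

    // Retry obtaining a connection; each attempt opens a new TCP
    // session, which HAProxy routes to the next available node.
    static Connection connect(int attempts) throws SQLException {
        SQLException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return DriverManager.getConnection(URL, "user", "password");
            } catch (SQLException e) {
                last = e; // the current node may have just failed
            }
        }
        if (last != null) throw last;
        throw new SQLException("no connection attempts made");
    }
}

A connection pool that validates connections on borrow achieves the same effect without hand-rolled retries.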
