How can I lower the zuul-proxy delay? - spring

I have an eureka-server, eureka-client-1, eureka-client-2 and zuul-proxy deployed.
The eureka-client-* instances keep changing from {status:"UP"} to {status:"DOWN"}.
How can I lower the delay so that zuul-proxy always points to one of the eureka-client-* that is {status:"UP"}?

There are several options to reduce the delay.
The values below are just examples.
Eureka Server Response Cache
eureka server property
eureka.server.response-cache-update-interval-ms: 5000 (default 30000)
The Eureka server keeps a response cache for performance, so for up to this many milliseconds it may serve stale registry data instead of the actual values it knows. You can reduce this delay with the property above.
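For instance, in the Eureka server's application.yml, this would look like the following (a minimal sketch; the nested form relies on Spring Boot's usual relaxed property binding):
eureka:
  server:
    response-cache-update-interval-ms: 5000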
Zuul Ribbon Cache
zuul property
ribbon.ServerListRefreshInterval: 5000 (default 30000)
As you know, Zuul uses Ribbon for load balancing, and Ribbon caches the server list. The default value is 30 seconds. You can shorten the Ribbon cache time with the property above.
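For instance, in zuul-proxy's application.yml (a minimal sketch; this sets the interval globally, though a per-service Ribbon namespace also exists):
ribbon:
  ServerListRefreshInterval: 5000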
Eureka Client Cache
eureka client property (In your case, it's zuul property)
eureka.client.registryFetchIntervalSeconds: 5 (default 30)
Zuul is itself a Eureka client, and it uses the Eureka client's cached registry to retrieve the list of available servers for a given service. The default fetch interval is 30 seconds, and you can lower it with the property above.
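Again in zuul-proxy's application.yml (a minimal sketch; registry-fetch-interval-seconds is the relaxed-binding form of the same property):
eureka:
  client:
    registry-fetch-interval-seconds: 5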
Lease Expiration Duration
eureka instance properties (In your case, for your eureka-client-1, eureka-client-2)
eureka.instance.lease-expiration-duration-in-seconds: 60 (default 90)
If the Eureka server doesn't receive a heartbeat within this many seconds, it removes that instance from the available server list. Each client sends a heartbeat every 30 seconds (that value is also configurable, but don't change it, because the Eureka server has code that assumes the 30-second interval). So 90 seconds means an instance is removed from the list after it fails to send (or the server fails to receive) three consecutive heartbeats. You can reduce this duration, but doing so carries some risk. This property must be set on the Eureka client side.
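For instance, in the application.yml of each of eureka-client-1 and eureka-client-2 (a minimal sketch):
eureka:
  instance:
    lease-expiration-duration-in-seconds: 60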
There is a great article with more detail here: http://blog.abhijitsarkar.org/technical/netflix-eureka/

Related

How to limit number of HTTP Connections for a rest web service

We want to limit the number of connections for our REST web service.
We are using Spring Boot with Jetty as the server.
We have configured the settings below:
#rate limit connections
server.jetty.acceptors=1
server.jetty.selectors=1
#connection time out in milliseconds
server.connection-timeout=-1
As you can see, there is no idle timeout applicable to connections,
which means a connection, once open, will remain active until it is explicitly closed.
So, with these settings, my understanding is that if I open more than 1 connection, I should not get any response, because the connection limit is only 1.
But this does not seem to be working; a response is sent for each request.
I am sending requests from 3 different clients. I have verified the IP addresses and ports; they are all different for the 3 clients. But all 3 remain active once the connections are established.
Can any experts guide me on this?
Setting the acceptors and selectors to 1 will not limit the maximum number of connections.
I suggest you look at using either the Jetty QoS filter or the Jetty ConnectionLimit module.
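With embedded Jetty, a sketch of both options could look like the following. This assumes Spring Boot 2.x with the Jetty starter and, for the QoSFilter alternative, the jetty-servlets dependency on the classpath; the class and bean names are illustrative, and ConnectionLimit needs Jetty 9.4.7 or newer:

import org.eclipse.jetty.server.ConnectionLimit;
import org.eclipse.jetty.servlets.QoSFilter;
import org.springframework.boot.web.embedded.jetty.JettyServletWebServerFactory;
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ConnectionLimitConfig {

    // Caps accepted TCP connections; once the limit is reached, Jetty
    // stops accepting new connections until the count drops below it.
    @Bean
    public JettyServletWebServerFactory jettyFactory() {
        JettyServletWebServerFactory factory = new JettyServletWebServerFactory();
        factory.addServerCustomizers(server ->
                server.addBean(new ConnectionLimit(1, server)));
        return factory;
    }

    // Alternative: QoSFilter limits concurrent requests (not raw
    // connections), suspending requests that exceed the limit.
    @Bean
    public FilterRegistrationBean<QoSFilter> qosFilter() {
        FilterRegistrationBean<QoSFilter> registration =
                new FilterRegistrationBean<>(new QoSFilter());
        registration.addInitParameter("maxRequests", "1");
        registration.addUrlPatterns("/*");
        return registration;
    }
}

ConnectionLimit operates at the connection level, while QoSFilter operates at the request level, so pick whichever matches what you actually want to cap.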

Will spring eureka client send heartbeat to all discovery servers?

Setup: 2 Eureka servers that replicate with each other, and 1 Eureka client whose default zone in application.yml is set to localhost:8761, localhost:8762.
Question:
1. Will the Eureka client send heartbeats to both 8761 and 8762?
Thanks,
Young
No, it will not send the heartbeat to both 8761 and 8762. How it works is: we provide the list of Eureka servers, and all clients pick the first server from the list (in your case 8761) and send their heartbeats to it. The clients switch to the other server only if the first Eureka server dies.
The second server will always get a copy of the registry from the first server.
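For reference, the client's server list in application.yml would look something like this (a minimal sketch; the order matters, since the client starts with the first URL):
eureka:
  client:
    service-url:
      defaultZone: http://localhost:8761/eureka/,http://localhost:8762/eureka/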

GCP Cloud Endpoints latency

The product overview of Cloud Endpoints states:
Extensible Service Proxy delivers security and insight in less than 1ms per call.
But I'm observing more than 10ms (and occasionally upwards of 100ms) of added latency.
Our server setup is as follows:
we have a GKE cluster, which has:
a Kubernetes Deployment for pods, each of which has an ESP container and our own container, which serves a gRPC service
a Kubernetes Service (of LoadBalancer type) whose target refers to the ESP container
we have an endpoint configuration for the gRPC service, which contains only the basics, as shown below
we issued an API key for clients
We had a client program in another GKE cluster in the same zone for this experiment.
With this setup, our experiments showed:
with a 15ms timeout on the client's end, more than 95% of calls timed out
on GCP's Endpoints dashboard, the majority of requests took more than 100ms
in Stackdriver Trace, all the latency is attributed to "Backend"
when measured at our own container, the latency was below 5ms
The server's CPU load was very low (below 10%), and there was no sign of overload at the time.
Assuming gRPC itself does not add much latency, we think the latency probably comes from the ESP.
So we ran another experiment with the ESP bypassed:
we modified the Kubernetes Service so that it refers to our own container instead of the ESP container
After this change, the latency measured at the client dropped to 5ms.
So, if our experiments were correct, it seems the ESP container adds latency far beyond the 1ms advertised in the product overview. Are we missing something?
Endpoint configuration:
type: google.api.Service
config_version: 3
name: foo.endpoints.bar.cloud.goog
title: foo in bar
apis:
- name: com.bar.FooService

504 Gateway timeout for ELB

I have an AWS Elastic Load Balancer with two healthy instances. If I make a POST request, it gets accepted, but subsequent requests throw a 504 Gateway Timeout error. After 5-10 minutes it accepts 2-4 requests and then starts throwing 504 errors again. What I am trying to reach is a Spring Boot application hosted on these two instances. There are no application-level timeouts, and the time between failed and accepted requests varies, so I believe no fixed timeout setting is causing the issue. How can I resolve this?

WebSphere http outbound socket configuration

We are running a performance test against a WebSphere 8.5.5.1 application server that calls many external SOAP services. Running netstat, we notice that the server creates at most 50 outbound connections.
We are trying to increase this value but cannot find the correct property. We have increased the Default pool, but this doesn't seem to apply.
The WebContainer pool size is also set higher than 50, and we can see that pool grow. Is there some hidden pool that defaults to 50?
You'll need to configure the com.ibm.websphere.webservices.http.maxConnection property based on this:
http://www-01.ibm.com/support/knowledgecenter/api/content/SSEQTP_8.5.5/com.ibm.websphere.base.iseries.doc/ae/rwbs_httptransportprop.html
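A sketch of the setting, assuming it is applied as a JVM custom property on the application server as the linked page describes (200 is an arbitrary example value):
com.ibm.websphere.webservices.http.maxConnection=200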
