WSO2AM version: 1.10.0
I set up API Manager after reviewing the deployment patterns document (https://docs.wso2.com/display/CLUSTER44x/API+Manager+Deployment+Patterns).
I am using an L4 load balancer.
I tested failover by reducing the gateway worker count from 3 to 2.
The result is strange: TPS drops to nearly zero for about 5 seconds after one gateway worker process is killed.
Why does this happen?
What is your L4 dispatch policy?
Does your L4 use session stickiness?
I have implemented caching in my Spring Boot REST application. My policy includes a time-based cache eviction strategy and an update-based cache eviction strategy. My concern is that, since the servers are stateless, if a call that updates certain data is handled by server instance A, the corresponding cache entries on server instances B, C and D are not updated as well.
Is this an issue I would face / is there a way to overcome this issue?
This is one of the oldest problems in software development: cache invalidation when you have multiple servers.
One way to handle it is to move the cache out of the individual servers and into something shared, such as a dedicated instance that holds the cache entries every app instance refers to, e.g. Redis (a centralized cache; a minimal sketch follows after these options).
A second way is to broadcast a message so that each server knows to invalidate the entry once the data has been modified or deleted. Here you run the risk of the message not being processed, leaving a stale entry on some server(s).
Another option is to have some sort of write-ahead log (e.g. Kafka or Redis Streams) that every server consumes, so they all process the events deterministically and end up with the same cache state.
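For the first (centralized cache) option, here is a minimal sketch, assuming spring-boot-starter-cache and spring-boot-starter-data-redis are on the classpath and a reachable Redis instance is configured; the class name and TTL value are just placeholders:

import java.time.Duration;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;

@SpringBootApplication
@EnableCaching
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    // All instances share the same Redis-backed cache, so an @CacheEvict executed on
    // instance A removes the entry that B, C and D would otherwise keep serving.
    @Bean
    public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
        RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofMinutes(10)); // placeholder TTL for the time-based eviction
        return RedisCacheManager.builder(connectionFactory)
                .cacheDefaults(config)
                .build();
    }
}

Your existing @Cacheable / @CacheEvict annotations stay as they are; only the cache store moves out of the individual JVMs.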
Let me know if you need more help; we can set up some time outside of SO.
I have an Artemis broker (2.10.1) running in a Docker container with one address but many (500+) queues. Each queue has a filter attribute, the filters don't overlap, and the routing type is multicast.
broker
- address: example
- multicast
- queue: dummy1 (filter: dummy=1)
- queue: dummy2 (filter: dummy=2)
- queue: dummy3 (filter: dummy=3)
- ...
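In broker.xml terms this corresponds roughly to the following (trimmed; the real file declares 500+ queues):

<addresses>
   <address name="example">
      <multicast>
         <queue name="dummy1">
            <filter string="dummy=1"/>
         </queue>
         <queue name="dummy2">
            <filter string="dummy=2"/>
         </queue>
         <!-- ... and so on up to 500+ queues ... -->
      </multicast>
   </address>
</addresses>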
When the client connects, the CPU usage for client and broker goes from ~5% up to ~40% according to htop (~20% normal + ~20% kernel). JMX reports ~10% CPU usage. When switching htop to tree view I can see the ~10% thread and many 0.x% threads. The queues are empty; I'm neither producing nor consuming any messages. The whole system is (or should be) idle. The client establishes a single connection but one session per queue, resulting in 500+ sessions.
What's wrong with my configuration? I can't see a reason for such high CPU usage and load.
Update:
I did some more tests and it turns out that the CPU usage/load only happens if Docker is involved.
broker running in Docker container on host A, pure artemis-core client running (without Docker) on host B: <5% CPU usage
broker running in Docker container on host A, pure artemis-jms client running (without Docker) on host A: <5% CPU usage
broker running in Docker container on host A, Spring Boot client with starter-artemis running (without Docker) on host B: 5-10% CPU usage
broker running in Docker container on host A, Spring Boot client with starter-artemis (which uses JMS) in Docker container running on host A: ~40% CPU usage
I am still doing more research; I just wanted to share the current state so that Artemis is no longer blamed for the bad figures.
By the way, an interesting side note: when idle, the two applications that use only an Artemis dependency (core and JMS) exchange nothing but a ping message every 30 seconds. The application built with Spring Boot and starter-artemis is very talkative. I can't yet tell you what this is about, except that I saw something about hornetq forced delivery seq. I assume this message volume is why the CPU usage goes from <5% to 5-10%.
Update 2:
Spring Boot with starter-artemis is talkative because by default it uses the DefaultJmsListenerContainerFactory, whose container polls for messages. If there aren't any messages within a given timeout it issues a force-pull command, which is the reason for those hornetq forced delivery seq messages. In my core/JMS tests I used an asynchronous message handler, which Spring Boot starter-artemis also provides if you switch to the SimpleJmsListenerContainerFactory.
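Switching looks roughly like this (a sketch assuming spring-boot-starter-artemis and Spring Boot's auto-configured ConnectionFactory; the bean name jmsListenerContainerFactory is what @JmsListener picks up by default):

import javax.jms.ConnectionFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.config.SimpleJmsListenerContainerFactory;

@Configuration
public class JmsConfig {

    // SimpleMessageListenerContainer registers an asynchronous MessageListener instead of
    // polling with receive(timeout), so the periodic forced-delivery round trips disappear.
    @Bean
    public SimpleJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
        SimpleJmsListenerContainerFactory factory = new SimpleJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        return factory;
    }
}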
The broker has recently been improved for scenarios like this (e.g. https://issues.apache.org/jira/browse/ARTEMIS-2990): I strongly suggest trying a more recent version.
If that doesn't fix your issue, I suggest running https://github.com/jvm-profiling-tools/async-profiler/ to sample CPU usage (it includes GC, compilation and native stack traces too).
Consider that the original address/queue management used synchronized operations that made Java threads contend heavily on the hot path: this can cause kernel/system CPU cycles to be spent managing the contention (remember: contended Java locks are backed by OS mutexes), and such CPU usage won't show up in JMX.
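For example, a typical async-profiler run against the broker JVM looks like this (a sketch; <broker-pid> is the Artemis process id, and the exact flags and output format vary a bit between profiler versions):

./profiler.sh -e cpu -d 30 -f /tmp/artemis-cpu.html <broker-pid>

That samples CPU for 30 seconds, including kernel and native frames, and writes a flame graph you can open in a browser.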
I have set up an nginx ingress that routes traffic to specific deployments based on host.
host A --> Service A, host B --> Service B
If I update the config for Deployment A, that update completes in less than 2 seconds. However, after that the nginx ingress goes down for host A and takes 5 to 7 seconds to point to Service A's new pod.
How can I reduce this delay? Is there a way to speed up the nginx ingress so that it points to the new pod as soon as possible (preferably in less than 3 seconds)?
Thank you!
You can use the nginx.ingress.kubernetes.io/service-upstream annotation to suppress the normal Endpoints behavior and use the Service directly instead. This has better integration with some deployment models but 5-7 seconds is extreme for ingress-nginx to be seeing the Endpoints update. There can be a short gap from when a pod is removed and when ingress-nginx sees the Endpoint removal. You usually fix that with a pre-stop hook that just sleeps for a few seconds to ensure by the time it actually exits, the Endpoint change has been processed everywhere.
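For reference, a rough sketch of both pieces (the annotation key is the real ingress-nginx one; resource names and the 5-second sleep are placeholders):

On the Ingress:

metadata:
  annotations:
    # route via the Service's ClusterIP instead of the individual pod Endpoints
    nginx.ingress.kubernetes.io/service-upstream: "true"

And in the Deployment's pod template:

spec:
  containers:
    - name: app
      lifecycle:
        preStop:
          exec:
            command: ["sleep", "5"]   # keep the old pod alive until the Endpoint removal has propagated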
We use JBoss AS 7, and I have always wondered whether it can be configured so that sessions are not lost after restarts or deployments. Is there any way to do that, assuming no non-serializable data is stored in the session?
Restarts: If you set up a cluster with session replication, your sessions should be served by another JBoss instance when the instance you are connected to fails (a minimal sketch of the required pieces follows below).
Deployments: Deployments generally take a long time (often more than a few minutes); I don't see the point in maintaining session state across deployments.
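For the restart/failover case, the usual pieces in JBoss AS 7 are to mark the webapp as distributable and to start the servers with the HA profile so the web session cache is replicated across the cluster (a sketch; paths and profile names depend on your installation).

In web.xml:

<distributable/>

Start each node with:

./standalone.sh -c standalone-ha.xml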
We installed the Nodequeue module on Pressflow 6 with Varnish. To clear the Varnish cache for node queues, we developed rules using the Rules and Cache Actions modules. The issue is that whenever we update content, the change is reflected for logged-in users but not for anonymous users. Could you please suggest how to clear the Varnish cache with Rules or with custom code?
Thanks,
Raghu
In the Varnish CLI (varnishadm):
ban.url .
This bans every cached URL and so clears the whole cache.