I have a custom Log4j appender that is working well. I need to modify its behavior so that it only logs based on a value the code obtains by calling a RESTful service. Since the value changes only once in a while, I want to cache it so I don't have to reach across the wire every time a log event comes in. I tried using a Guava cache but got the following exception:
Exception in thread "main" com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalStateException: Recursive load of
I guess this makes sense, since the lookup happens on every LogEvent.
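For reference, this is roughly what I set up (a simplified sketch; the class and method names are made up and the REST call is stubbed out):

    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.CacheLoader;
    import com.google.common.cache.LoadingCache;
    import java.util.concurrent.TimeUnit;

    // Cached flag consulted by the custom appender on every append() call.
    public class LoggingFlagCache {

        private final LoadingCache<String, Boolean> cache = CacheBuilder.newBuilder()
                .expireAfterWrite(5, TimeUnit.MINUTES) // the value only changes once in a while
                .build(new CacheLoader<String, Boolean>() {
                    @Override
                    public Boolean load(String key) {
                        // If this call (or anything on its path) logs, the resulting
                        // LogEvent re-enters the appender, which calls getUnchecked()
                        // again on the same thread -> "Recursive load" exception.
                        return fetchFlagFromRestService(key);
                    }
                });

        public boolean isLoggingEnabled() {
            return cache.getUnchecked("logging.enabled");
        }

        private Boolean fetchFlagFromRestService(String key) {
            return Boolean.TRUE; // placeholder for the real REST client call
        }
    }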
What is the right way to have some kind of cache that's shared across all Log4j threads?
Thanks.
Spring's community overview page lists Stack Overflow as the only place to ask questions, which is why I'm asking this rather generic question here, even though it may not be the best fit for SO.
I have a Spring Boot application built on Spring Cloud Gateway (version 2) which also uses an embedded Hazelcast cluster. It runs as multiple instances, which communicate via Hazelcast. Everything works fine except under heavy load: if one instance fails, restarting it is no longer possible.
When the instance is restarted while the cluster of instances is under heavy load, it starts creating and wiring beans up to some point, after which it does nothing Spring-related anymore. Hazelcast-generated messages are still visible in the log past that point (with root log level DEBUG), but nothing generated by Spring or the application itself.
In order to restart that one instance that failed, I need to stop the load generation, wait some 10-15 minutes, then restart the failed instance. Then the new/restarted instance starts up rather quickly, with no problems at all.
The load consists of HTTP requests which get proxied to another application, and is of such a nature that it generates a lot of read accesses to Hazelcast's distributed storage but very few writes.
My problem: I have no idea how to debug this. Since the HTTP endpoint never becomes available, there's no way I can query metrics or other Actuator information.
So my question is: what tools or mechanisms can I employ to debug this problem? That is, how can I find out exactly how the boot sequence differs when the other instances of the Hazelcast cluster are under heavy load compared to when there is no load on the cluster at all? Once I have this information, the problem is narrowed down enough for me to investigate it further on my own.
I didn't find a way to debug the problem, but I had an idea of what might be causing it, tried it, and it turned out to be a fix.
My application was running as a Kubernetes deployment. A few beans inside the application were relying on a usable CP subsystem during their initialization. Spring's bean initialization process is by necessity sequential and blocking, to account for inter-bean dependencies.
I hypothesized that under heavy load, for whatever reason, the initialization of those beans was blocking forever. As a first experiment, to see whether that was at least part of the problem, I made that initialization code asynchronous so that Spring could finish bean wiring, even though the instance would be unable to perform useful work until the async part had finished too.
To my surprise, that fully fixed the problem. Spring finished bean wiring, the HZ-dependent initialization also finished rather quickly when executed asynchronously, even under high load, and the instance became usable soon after being started.
I didn't have the time to dig deeper to find out what the precise failure mechanism was. What I believe might have been the problem is the interaction between HZ and K8s. K8s-based discovery works through a K8s service, and a pod/instance isn't added to that service until it becomes healthy. If a bean inside the application blocks initialization, the instance never becomes healthy and is never added to the service; as a result, discovery never finds the new/restarted instance. I don't know what effect this might have on the HZ cluster's inner workings.
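For illustration, the change was along these lines (a simplified sketch; the bean name and the actual CP subsystem call are made up):

    import java.util.concurrent.CompletableFuture;
    import javax.annotation.PostConstruct;
    import org.springframework.stereotype.Component;
    import com.hazelcast.core.HazelcastInstance;

    // Defers the CP-subsystem-dependent work so Spring can finish wiring the
    // remaining beans even while the cluster is under heavy load.
    @Component
    public class CpDependentInitializer {

        private final HazelcastInstance hazelcast;
        private volatile boolean ready;

        public CpDependentInitializer(HazelcastInstance hazelcast) {
            this.hazelcast = hazelcast;
        }

        @PostConstruct
        public void init() {
            // Previously this work ran synchronously here and blocked bean creation.
            CompletableFuture.runAsync(() -> {
                hazelcast.getCPSubsystem().getAtomicLong("startup-marker").incrementAndGet();
                ready = true;
            });
        }

        public boolean isReady() {
            return ready; // the instance only does useful work once this is true
        }
    }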
I have used a Spring Boot scheduler with the @Scheduled annotation and a fixedRateString of 1 second. This scheduler intermittently stops working for approximately 2 minutes and then starts working again on its own. What are the possible reasons for this behavior, and is there a resolution for it?
Below is the code snippet for the scheduler.
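It looks roughly like this (simplified; the real class and property names differ, and the scheduler itself is also configured in a Spring XML file):

    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;

    @Component
    public class PollingScheduler {

        private final Object lock = new Object();

        // Runs every second.
        @Scheduled(fixedRateString = "1000")
        public void poll() {
            synchronized (lock) {
                // business logic that other threads also synchronize on
            }
        }
    }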
1st) Please read the SO guidelines:
DO NOT post images of code, data, error messages, etc. - copy or type the text into the question. Please reserve the use of images for diagrams or demonstrating rendering bugs, things that are impossible to describe accurately via text.
2nd) To your problem
You use an XML-based Spring configuration where you have configured your scheduler. Then you also use the @Scheduled annotation. You should not mix those two different ways of configuring beans.
You also use some kind of thread synchronization inside this method. Probably some thread is stuck outside the method because of the lock, and this disturbs the functionality that you want.
Remove either the XML configuration or the annotation-based scheduling, and then debug to see why the method behaves as it does; most probably it comes from what I mentioned above about the locks and the duplicated configuration.
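For example, a minimal annotation-only setup (illustrative names) would look like this, with no scheduling elements left in the XML:

    import org.springframework.context.annotation.Configuration;
    import org.springframework.scheduling.annotation.EnableScheduling;
    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;

    // Enable scheduling in exactly one place.
    @Configuration
    @EnableScheduling
    public class SchedulingConfig {
    }

    @Component
    class PollingTask {

        // Keep the method short and avoid long-held locks: by default all @Scheduled
        // methods share a single scheduler thread, so a run that blocks on a lock
        // delays every following run and looks like the scheduler has "stopped".
        @Scheduled(fixedRateString = "1000")
        public void poll() {
            // do the work
        }
    }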
Logback's MDC (Mapped Diagnostic Context) leverages ThreadLocal (as far as I know) so that it is accessible to all the log statements executed by the same thread.
My question is: will Logback MDC work in a non-blocking IO server-side runtime like Netty or Undertow the way it works in, for example, Tomcat?
If yes, how does it work, given that Netty/Undertow do not follow the one-thread-per-request model, unlike Tomcat?
I am trying to put a traceId in the MDC so that I can trace all the logs of one transaction across multiple microservices/pipeline listeners in a centralized logging system like Splunk/ELK.
I'm not terribly familiar with using Netty directly, but I do know that it is possible to use Logback MDC with asynchronous code. It isn't easy, though.
Basically you need to somehow tie the traceId to the request/connection, and each time you start processing for that connection, set the traceId in the MDC for the current thread.
If you are using a thread pool, you can do this with a custom Executor that gets the current state (e.g. the traceId) from thread-local storage and creates a wrapped Runnable or Callable that sets thread-local storage to the previously retrieved value before running. That wrapped Runnable is then forwarded to a delegate Executor.
You can see how this is done here: https://github.com/lucidsoftware/java-thread-context/blob/master/thread-context/src/main/java/com/github/threadcontext/PropagatingExecutorService.java
In fact, depending on your needs, you may be able to use that java-thread-context project.
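For illustration, a simplified version of that wrapping idea using SLF4J's MDC API (the class name is made up) could look like this:

    import java.util.Map;
    import java.util.concurrent.Executor;
    import org.slf4j.MDC;

    // Captures the submitting thread's MDC and restores it on the worker thread
    // around each task, then puts the worker's previous MDC back afterwards.
    public class MdcPropagatingExecutor implements Executor {

        private final Executor delegate;

        public MdcPropagatingExecutor(Executor delegate) {
            this.delegate = delegate;
        }

        @Override
        public void execute(Runnable task) {
            Map<String, String> captured = MDC.getCopyOfContextMap(); // may be null
            delegate.execute(() -> {
                Map<String, String> previous = MDC.getCopyOfContextMap();
                if (captured != null) {
                    MDC.setContextMap(captured);
                } else {
                    MDC.clear();
                }
                try {
                    task.run();
                } finally {
                    if (previous != null) {
                        MDC.setContextMap(previous);
                    } else {
                        MDC.clear();
                    }
                }
            });
        }
    }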
As you have mentioned, the MDC implementation is based on ThreadLocal, so it's not directly applicable to Reactor.
What I would suggest in the case of Reactor (I assume from the tags that you are using it) is:
Use the Context (docs here) available in Mono and Flux to store the traceId so that it is available during execution.
If you're using Spring WebFlux and want to generate the traceId and add it to the Context in one place, you can use a WebFilter (docs here) to do that.
Then you can retrieve the data from the Context using some custom helper methods, as in the example by Nicolas Portman, or create your own custom logging format where you utilize the data from the Context.
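For illustration, a minimal sketch (assuming Spring WebFlux and Reactor 3.4+; the names are made up) could look like this:

    import java.util.UUID;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.slf4j.MDC;
    import org.springframework.stereotype.Component;
    import org.springframework.web.server.ServerWebExchange;
    import org.springframework.web.server.WebFilter;
    import org.springframework.web.server.WebFilterChain;
    import reactor.core.publisher.Mono;

    // Generates the traceId once per request and stores it in the Reactor Context.
    @Component
    public class TraceIdWebFilter implements WebFilter {

        public static final String TRACE_ID = "traceId";

        @Override
        public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
            String traceId = UUID.randomUUID().toString();
            // contextWrite is Reactor 3.4+; older versions use subscriberContext.
            return chain.filter(exchange).contextWrite(ctx -> ctx.put(TRACE_ID, traceId));
        }
    }

    // Reads the traceId back from the Context and copies it into the MDC just
    // around the log statement, since the MDC itself is still per-thread.
    class TraceIdLogging {

        private static final Logger log = LoggerFactory.getLogger(TraceIdLogging.class);

        static Mono<Void> logWithTraceId(String message) {
            return Mono.deferContextual(ctx -> {
                String traceId = ctx.getOrDefault(TraceIdWebFilter.TRACE_ID, "unknown");
                MDC.put(TraceIdWebFilter.TRACE_ID, traceId);
                try {
                    log.info(message);
                } finally {
                    MDC.remove(TraceIdWebFilter.TRACE_ID);
                }
                return Mono.empty();
            });
        }
    }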
I have a large, Spring-based e-commerce framework as a codebase, which makes use of a preconfigured Log4j. I can override property values defined in log4j.properties.
I'd like to configure Log4j to log asynchronously to the console/file. I've attempted to define new appenders and override existing ones, but I am not seeing any output on the console and am unsure why.
Is it possible to wrap the current Log4j setup in asynchronous calls?
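For example, would something along these lines be a reasonable way to do it programmatically at startup (assuming Log4j 1.x, and assuming the framework's console appender is named "CONSOLE")?

    import org.apache.log4j.Appender;
    import org.apache.log4j.AsyncAppender;
    import org.apache.log4j.Logger;

    // Wraps an already-configured appender in an AsyncAppender, since
    // log4j.properties itself cannot express AsyncAppender's appender-refs.
    public class AsyncLoggingBootstrap {

        public static void wrapConsoleAppender() {
            Logger root = Logger.getRootLogger();
            Appender console = root.getAppender("CONSOLE"); // appender name is an assumption
            if (console == null) {
                return;
            }
            AsyncAppender async = new AsyncAppender();
            async.setName("ASYNC");
            async.addAppender(console);

            root.removeAppender(console);
            root.addAppender(async);
        }
    }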
I have a use case that, theoretically, seems to me like it should be a solved problem, but I'm not able to find a sure-fire implementation.
I've created a RESTful API using Apache CXF, Spring, and Hibernate.
The application follows a standard Service-Proxy-DAO layered structure.
I need to instantiate a custom logger object at my service (or pre-service) layer and initialize a bunch of parameters which, for the most part, remain constant through every call that goes through my application layers and back.
How can I, for every individual service call, initialize this logger object once and use it across all my layers without having to instantiate it every time? Perhaps by injecting the initialized object into every class that needs it, or something along those lines.
I don't want to use static blocks or pass the object in method signatures.
Is there anything I can use as part of Spring, CXF, or another Java framework that allows me to implement this use case?
EDIT: I would define a transaction as a single call to a web service endpoint, from invocation to response.
ThreadLocal would be an ideal candidate to solve your problem.
UPDATE:
Creating a ThreadLocal that is available in all the places where this "shared" reference is required gives all those contexts access to the resource without having to pass the reference around.
See http://www.appneta.com/blog/introduction-to-javas-threadlocal-storage/ - it looks like a good explanation of how to use ThreadLocal and also deals with your problem space.
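For example, a minimal sketch of such a per-request holder (the names are illustrative) could be:

    // Per-request context, populated once at the service entry point (e.g. a CXF
    // in-interceptor or servlet filter) and readable from any layer on the same thread.
    public final class RequestLogContext {

        private static final ThreadLocal<RequestLogContext> CURRENT = new ThreadLocal<RequestLogContext>();

        private final String transactionId;
        private final String userId;

        private RequestLogContext(String transactionId, String userId) {
            this.transactionId = transactionId;
            this.userId = userId;
        }

        // Call once per web service invocation, before the service logic runs.
        public static void begin(String transactionId, String userId) {
            CURRENT.set(new RequestLogContext(transactionId, userId));
        }

        // Available from the service, proxy and DAO layers without being passed around.
        public static RequestLogContext current() {
            return CURRENT.get();
        }

        // Call when the response has been written; otherwise the value leaks to the
        // next request handled by the same pooled thread.
        public static void clear() {
            CURRENT.remove();
        }

        public String getTransactionId() { return transactionId; }

        public String getUserId() { return userId; }
    }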